Understanding the semantics of Web content is at the core of many applications, ranging from Web search, news aggregation, and machine translation to personal assistant services such as Amazon Echo, Cortana, Siri, and Google Home. Latent semantics draws on a rich suite of information retrieval and machine learning techniques that capture meaning through powerful statistical, neural network-based methods such as word2vec and node2vec. Recently, such emerging semantic models have achieved state-of-the-art results in several predictive applications (e.g., recommendation, node classification, and knowledge graph completion) relevant not just to the broader World Wide Web research community, but also to allied communities such as the Semantic Web, data mining, and natural language processing. The LSW workshop explores the convergence of latent semantics (LS) models and the Web, focusing on several aspects of LS models that are particularly relevant to the Web, namely:
• Novel methods, including embedding methods, that take into account the specific properties of the Web (e.g., link structure, multimedia content)
• Evaluation of LS methods, especially in a Web context
• Intersection of LS models with traditional ontological semantics
• Reasoning about such models in a rigorous way
• Extending the scope of these models with techniques such as zero-shot learning and transfer learning
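To illustrate the kind of latent-semantic reasoning these models enable, the sketch below performs a word-analogy query by vector arithmetic over embeddings, as popularized by word2vec. The four 3-dimensional vectors are hand-crafted toy values for illustration only; real embeddings are learned from large corpora and are typically hundreds of dimensions.

```python
import math

# Toy 3-dimensional "embeddings" -- hand-crafted for illustration only;
# real word2vec vectors are learned from corpus co-occurrence statistics.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def analogy(a, b, c):
    """Return the word whose vector is closest to vec(a) - vec(b) + vec(c),
    excluding the three query words (standard practice for analogy tasks)."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(target, vectors[w]))

print(analogy("king", "man", "woman"))  # -> queen
```

The same nearest-neighbor-in-embedding-space pattern underlies many of the predictive applications mentioned above, such as recommendation and knowledge graph completion.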