In early August, the annual meeting of the Association for Computational Linguistics (ACL) took place in Vancouver. The field studies how to understand, extract information from, and generate verbal and textual communication. As one of the most important conferences in computational linguistics, it is the place where experts meet to discuss the challenges ahead in the area. The applications are everywhere, from extracting information from a news article to understanding how empathy can improve our communication with robots. We participated, presenting a poster at the SemEval competition. These are the areas and talks that we found especially interesting:
Information extraction and question-answering papers
Common knowledge acquisition. NLP models extract semantic information from text, that is, from the information we explicitly talk about. But there is a lot of very common knowledge that we almost never state explicitly. For instance, if a robot wants to know whether holding an apple is easier than holding a watermelon, it is very hard to learn this from everyday conversations, since we only occasionally mention their size and weight. Yet this information is extremely important for artificial intelligence applications dealing with real-life scenarios, since you cannot feed the system the properties of every object. Building such knowledge requires memorizing facts, experiencing things, and integrating concepts from the world. A very interesting presentation (and paper) on this subject was given by James F. Allen. The key observation of their work is that it is easier to learn this common knowledge by having humans compare objects that are very similar (apples with apples) rather than arbitrary objects (apples to trucks). To advance on this subject, they propose a new task (based on Mechanical Turk) for entity comparison. They also compare neural models with their ‘semantic approach’, based on populating a knowledge base and constructing an entity ordering lattice. Surprisingly, the semantic models perform better than the neural ones. See Bakhshandeh and Allen (2017) for more details.
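The idea of turning pairwise comparisons into an entity ordering can be sketched as follows. This is a minimal illustration, not the authors' actual system: the entities and the "heavier than" judgments below are hypothetical stand-ins for the kind of data crowd workers might produce.

```python
from itertools import product

# Hypothetical pairwise judgments comparing similar entities on one
# attribute (weight): each pair (a, b) means "a is heavier than b".
judgments = [
    ("watermelon", "apple"),
    ("apple", "cherry"),
    ("melon", "apple"),
]

def transitive_closure(pairs):
    """Return every (a, b) pair implied by the judgments via transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        # product() snapshots the set up front, so adding during the loop is safe
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

order = transitive_closure(judgments)

def heavier(a, b):
    """True if a is known to outweigh b, False for the reverse, None if unknown."""
    if (a, b) in order:
        return True
    if (b, a) in order:
        return False
    return None

print(heavier("watermelon", "cherry"))  # True, by transitivity
print(heavier("melon", "watermelon"))   # None: the two were never compared
```

Note how sparse crowd data still yields useful answers: "watermelon vs. cherry" was never asked directly, but follows from the chain of comparisons, while incomparable entities are honestly reported as unknown.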
Multimodal information extraction. Mirella Lapata was the invited speaker on Wednesday. She presented a very original perspective on multimodal learning: many sources that are now on the web are not accessible through search because they are not in a text format. Think, for instance, of multimedia content, Excel sheets (databases in general), or code. She argues that the only way to access all this information in an intuitive way is to translate this multimodal content into natural language. An obvious example is to associate with images a textual description of what they represent. She presented three examples she is currently working on: source code generation, automatic movie reviewing, and text simplification (making text more readable for kids, for instance).
Dialog and chat bots
Chat bots with empathy. Pascale Fung was an invited speaker at the WiML workshop. She presented her latest work on how to create empathic chat bots. Empathy is the capacity to understand and share other people’s feelings. So, the main question behind her research is: can we understand and predict the sentiment of a crowd or a financial market? Maybe even a music genre? This has a huge number of applications, for instance analyzing the psychological state of a person for medical reasons, or, in the financial sector, helping asset managers with their investments. She presented ‘Zara the Supergirl’, the first empathic chat bot. During the demo, we could observe how Zara was able to interact in a natural way with Dr. Fung by understanding her interests and counseling her on her concerns. This was achieved in part thanks to a new word embedding she presented, which makes emotions easier to detect. The demo can now be tested online.
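The general idea behind emotion-aware embeddings can be illustrated with a toy nearest-prototype classifier. This is a minimal sketch, not Dr. Fung's actual embedding: the vocabulary, the 3-dimensional vectors, and the emotion classes are all made up for illustration (a real model would learn vectors from data).

```python
import math

# Toy "emotion-aware" word vectors (hypothetical values).
embeddings = {
    "joyful": [0.9, 0.1, 0.0],
    "happy":  [0.8, 0.2, 0.1],
    "sad":    [0.1, 0.9, 0.2],
    "gloomy": [0.2, 0.8, 0.3],
    "angry":  [0.1, 0.3, 0.9],
}

# One prototype vector per emotion class.
prototypes = {
    "happiness": embeddings["happy"],
    "sadness":   embeddings["sad"],
    "anger":     embeddings["angry"],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def detect_emotion(word):
    """Label a word with the emotion whose prototype is nearest in the space."""
    vec = embeddings[word]
    return max(prototypes, key=lambda emotion: cosine(vec, prototypes[emotion]))

print(detect_emotion("joyful"))  # happiness
print(detect_emotion("gloomy"))  # sadness
```

The point of an emotion-aware space is that words with similar affect cluster together, so even words never seen with an explicit label ("joyful", "gloomy") land near the right prototype.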
The lifetime achievement award was given to Barbara Grosz, well known for her work on the foundations of dialog. Dialog has been identified on many occasions as a measure of intelligence (Turing, 1950, for instance). In real-life scenarios, dialog is complex: there may be several conversations going on at the same time, identifiable only by intonation. She presented some other examples of complex discourse structures, and her computational models of them. The main challenges she identified were (1) ethics: bots need to be transparent with their users, and (2) data: there are very few datasets of real-life conversations from different cultures and countries.
Many advances in very different areas of computational linguistics were presented, but there was an evident anxiety in the community regarding the role of linguistics in the deep learning era. Can deep learning techniques alone solve all problems in natural language processing, given enough data? Depending on the talk, you would get a ‘yes’ or a ‘no’ answer. The most explicit about this issue was Mirella Lapata in her talk. In Figure 1 we can see the painting The Scream by E. Munch with some of the questions that the community is asking itself. Still, her answer during the talk was that although deep learning is here to stay, natural language knowledge will drive the advances in the area, for instance by defining rewards that are relevant and intuitive.
O. Bakhshandeh and J. F. Allen. Apples to Apples: Learning Semantics of Common Entities Through a Novel Comprehension Task. Proc. ACL 2017, Vancouver.
Y. Yu et al. Improved Neural Relation Detection for Knowledge Base Question Answering. Proc. ACL 2017, Vancouver.
A. M. Turing. Computing Machinery and Intelligence. Mind, vol. 59, no. 236, pp. 433–460, 1950.