Note #01

Recently I finished a project I had worked on for a while, and today I looked into other potential research areas where my interests in computers, programming, linguistics, philosophy, semantics, and Jewish and Israeli studies intersect. I identified six key areas: meaning representation, natural language understanding, language and thought, machine ethics and AI, explainability and transparency in AI, and ontology and information extraction. The list is broad, which is not surprising.

With GPT's help, I outlined key findings, challenges, trends, and key players in each area. I am still not sure how to fit my favorite semioticians and linguists into the picture, but for now I want to see where it leads. Importantly, I also verified in a few articles that advances such as GPT-4 highlight the continued relevance of these areas rather than rendering them obsolete.

To test GPT's analytical skills, I had it rate each field by its requirements for programming skills, technical knowledge, and non-technical knowledge, and also list helpful tools, libraries, and concepts for each area. The results were quite surprising, and I will elaborate on them tomorrow.

Brain teasers from G.:

  • Representation of Meaning
    • How can we build better cross-lingual or language-agnostic representations?
    • Can we develop better methods for representing meaning in context?
    • How can we measure and quantify the quality of a representation of meaning?
    • Can we develop methods to understand and represent non-literal language, such as irony, sarcasm, and metaphors? — This requires an understanding of culture, context, and the subtleties of human communication, which goes beyond pure technical expertise.
    • How can we incorporate world knowledge or commonsense reasoning?
  • Natural Language Understanding
    • How can we improve understanding of complex, multi-sentence texts?
    • How can we improve robustness to linguistic variations and noise?
    • How can we leverage world knowledge or external databases?
    • Can we understand the underlying intent of a user’s language input?
    • How can we ensure fairness and reduce biases?
  • Language and Thought
    • How does language shape our thought processes and cognitive abilities? — This question is fundamentally interdisciplinary, drawing on linguistics, psychology, cognitive science, and philosophy.
    • Can we build computational models that mimic cognitive processes?
    • How do we integrate linguistic knowledge with visual or auditory information?
    • How can we understand metaphorical or abstract language?
    • What role does language play in decision-making processes?
  • Machine Ethics and AI
    • Can AI systems provide explanations of their actions in ethical terms?
    • How can we ensure ethical behavior and alignment with human values?
    • How can we incorporate ethical considerations into AI design and deployment?
    • How can we mitigate biases and ensure fairness in AI systems?
    • How should conflicts between ethical principles be handled?
  • Explainability and Transparency
    • How can AI models provide explanations humans can understand?
    • How can we measure the quality of AI explanations?
    • Can we make complex AI models more transparent?
    • How can we make AI development and deployment more transparent and accountable?
    • How can we balance transparency and explainability with privacy and proprietary information?
  • Ontology and Information Extraction
    • How can we build accurate and efficient information extraction systems?
    • Can we extract information from non-traditional sources?
    • How can ontology improve information extraction?
    • Can we automate ontology updates and maintenance?
    • How can we develop cross-cultural and multilingual information extraction systems?

Yuval Noah Harari: AI has hacked the operating system of human civilization

https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation

Key Points:

  1. AI’s potential to form intimate relationships with people could shift the battlefront from attention to intimacy, altering human society and psychology.
  2. The unchecked power of AI could lead to the end of human-dominated history, as AI begins to generate its own culture.
  3. The ability of AI to craft compelling narratives could influence politics, establish new cults, and even redefine the meaning of money.
  4. AI’s mastery of language manipulation presents an unprecedented threat to human civilization.
  5. The need for immediate regulation, including the mandatory disclosure of AI, is crucial to avoid a catastrophe and preserve democracy.

In this thought-provoking piece, Yuval Noah Harari raises alarms about the AI revolution’s potential to reshape human civilization as we know it. He argues that the new AI tools, capable of manipulating and generating language, can disrupt the very fabric of our society. Language, he reminds us, is the operating system of our civilization, forming the basis of our human culture, from human rights to religious beliefs, and even money.

In a world where AI might soon surpass human abilities in crafting compelling narratives, the consequences could be unprecedented, from mass-produced political content to scriptures for new AI-generated cults. Harari also ponders a future where intimacy, rather than attention, becomes the new battleground as AI gains the ability to form intimate relationships with millions. This paradigm shift could drastically influence human society, psychology, and even the course of history itself.

However, Harari asserts that such potential catastrophe can be averted with the right regulations, emphasizing the need for AI to be transparently identified as such to preserve the essence of human conversation and democracy.

Google Language Interpretability Tool demos

https://pair-code.github.io/lit/demos/

  • Text generation – a T5 model summarizes a text.
  • Fill in the blanks – the model predicts the token that should fill the blank when any token in an example sentence is masked out.
  • Classification and regression models – the demo includes binary classification (sentiment analysis on SST-2) and multi-class classification (textual entailment).
  • Gender bias in coreference systems – gendered associations in a coreference system, which matches pronouns to their antecedents.
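The fill-in-the-blanks idea can be illustrated with a minimal sketch: a toy model that predicts a masked token from bigram counts over a tiny invented corpus. This is only a conceptual stand-in — the corpus, the `fill_mask` helper, and the scoring here are my own illustration; LIT’s actual demo uses BERT-style masked language models.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (invented for this sketch).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count bigrams: which word tends to follow each word.
bigrams = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def fill_mask(sentence, mask="[MASK]"):
    """Predict the most likely token for the masked position,
    using only the word immediately before the mask."""
    tokens = sentence.split()
    i = tokens.index(mask)
    if i == 0 or not bigrams[tokens[i - 1]]:
        return None  # no left context or unseen word
    return bigrams[tokens[i - 1]].most_common(1)[0][0]

print(fill_mask("the dog [MASK] on the mat"))  # → sat
```

A real masked language model conditions on the whole sentence, not just the previous word, but the interface is the same: mask a token, score candidates, return the most probable one.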

Call for Abstracts: CLARIN Annual Conference 2021

The CLARIN Annual Conference is organized for the wider Humanities and Social Sciences community in order to exchange experiences and best practices in working with the CLARIN infrastructure and to share plans for future developments. The programme will cover a range of topics, including the design, construction and operation of the CLARIN infrastructure, the data, tools and services that it contains or should contain, its actual use by researchers, teachers or interested parties, its relation to other infrastructures and projects, and the CLARIN Knowledge Sharing Infrastructure.

IMPORTANT DATES

  • 19 January 2021: Call for Abstracts issued
  • 14 April 2021: Submission deadline
  • 30 June 2021: Notification of acceptance
  • 27 August 2021: Camera-ready submission deadline  
  • 27-29 September 2021: CLARIN Annual Conference

https://www.clarin.eu/content/call-abstracts-clarin-annual-conference-2021