Thank you for attending our OpenAI event!

If you want to learn more or get more involved, there are plenty of ways to do so. If you have any other questions, feel free to get in touch with us here.

Courses (on-campus, academic year 2023-24):

Please fill out the expression-of-interest form(s) to get the earliest updates on when sign-ups open. The courses below are widely considered the gold standard for an introduction to the field of AI Safety. The curricula have been carefully crafted and are regularly updated by experts from organisations such as DeepMind and OpenAI.

rough curriculum here

rough curriculum here

rough curriculum here

Help Organize AISIG

We are always looking for more enthusiastic people to join our team and help out in any capacity! Please don't hesitate to fill out the form below, and we will get in touch to arrange an informal meeting to discuss further.

Slides from the presentation

Slides AISIG
Slides OpenAI
Slides AISIG (call-to-action)

Recent Headlines

Political:

European Commission President Ursula von der Leyen agrees that mitigating the risk of extinction from AI should be a global priority.
UK Government establishes Frontier AI Taskforce, gathering a team of experts with 50 cumulative years of state-of-the-art AI experience within just eleven weeks. The taskforce is backed by £100mn in initial government funding and includes Paul Christiano, who previously ran the language model alignment team at OpenAI.

Yoshua Bengio, winner of the 2018 Turing Award (commonly referred to as the “Nobel Prize” equivalent for Computer Science/AI), is appointed to the United Nations’ Scientific Advisory Board after publicly announcing his concern that ‘Rogue AI’ poses an existential threat.

Senator Richard Blumenthal opens a Senate hearing with Prof. Yoshua Bengio, Prof. Stuart Russell and Dario Amodei (CEO of AI lab Anthropic) by stating that he is concerned about “an intelligence device out of control, autonomous self-replicating, potentially creating diseases, pandemic grade viruses, or other kinds of evils, purposely engineered by people or simply the result of mistakes, not malign intention”.

UN Secretary-General António Guterres embraces calls for a new UN agency on AI in the face of ‘potentially catastrophic and existential risks’.
Within 2-3 years, AI tools could be capable of giving complete, fully detailed instructions for carrying out large-scale biological attacks, states Dario Amodei (CEO of AI lab Anthropic) at a U.S. Senate hearing.

Commercial:

Google’s next-generation foundation model, Gemini, is reportedly set for release by the end of this year, trained with 5x the computational power used for GPT-4.

Within 18 months, models will be trained with 100x the computational power used for GPT-4, claims Inflection.ai founder Mustafa Suleyman (co-founder of AI lab DeepMind).

Google DeepMind is hiring engineers to create “increasingly autonomous language agents”.

Meta likely intends to continue releasing very large open-source models to the general public, despite the possible associated risks.

Chinese hackers exploit Microsoft software to access hundreds of thousands of U.S. government emails. Given that GPT-4 and other AI systems are trained on Microsoft servers, this historic breach raises questions about the cybersecurity of AI systems.

AI Safety research:

Study shows that “jailbreak” prompts, which ‘force’ large language models (like ChatGPT) to output harmful content, can be generated in an entirely automated fashion.

‘A Taxonomy and Analysis of Societal-Scale Risks from AI’ is released, co-authored by Stuart Russell, author of ‘the most popular artificial intelligence textbook in the world’.

Epoch AI releases an interactive model that can be used to predict the arrival of ‘Transformative AI’, based on neural scaling laws.
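For intuition, the core move behind such scaling-law forecasts can be sketched in a few lines. Everything below (the data points, the power-law form, the “transformative” loss threshold) is invented for illustration and is not Epoch AI's actual model:

```python
# Toy extrapolation in the spirit of neural scaling laws.
# All numbers are made up for illustration.
import numpy as np

# Hypothetical (compute, loss) observations, assuming loss ~ a * C^b
compute = np.array([1e20, 1e21, 1e22, 1e23])  # training compute (FLOP)
loss = np.array([2.8, 2.4, 2.1, 1.85])        # invented eval losses

# Fit the power law via linear regression in log-log space
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

# Extrapolate: how much compute pushes loss below an (invented) threshold?
target_loss = 1.5
required_compute = (target_loss / a) ** (1 / b)
print(f"fitted: loss ~ {a:.1f} * C^({b:.3f})")
print(f"compute to reach loss {target_loss}: {required_compute:.2e} FLOP")
```

Real forecasting models layer on much more (algorithmic progress, investment trends, data constraints), but the basic move is the same: fit a trend to past training runs, then extrapolate forward.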

Video recordings are released from a two-day workshop on AI Alignment held in February 2023 and attended by 80 of the world’s leading machine learning researchers, with an opening talk in which Ilya Sutskever states that “AGI is no longer a dirty word”.

Miscellaneous:

To mark six months passing since the “pause giant AI experiments” open letter was published, the Future of Life Institute releases: a list of questions that must be answered by AI companies to inform the public, a list of policy proposals to steer AI toward benefiting humanity, and a set of specific recommendations for the upcoming UK AI Safety Summit.

Yoshua Bengio shares writing on the ‘Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks’.

AI Capabilities research:

AI can be used on itself to generate prompts that maximise task accuracy.
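As a rough sketch of what “using AI on itself” can look like, in the spirit of automatic prompt-engineering methods: the model proposes candidate instructions, and a small held-out set decides which one wins. The `llm()` function below is a hypothetical stand-in for any text-completion API, not a real library call:

```python
# Sketch of automated prompt search. llm() is a hypothetical placeholder.
def llm(text: str) -> str:
    """Hypothetical stand-in for a real text-completion API."""
    raise NotImplementedError("plug in a real model API here")

def accuracy(prompt: str, dev_set: list[tuple[str, str]]) -> float:
    """Fraction of (input, answer) pairs the prompt gets right."""
    hits = sum(llm(f"{prompt}\n{x}").strip() == y for x, y in dev_set)
    return hits / len(dev_set)

def best_prompt(task_description: str, dev_set: list[tuple[str, str]],
                n_candidates: int = 20) -> str:
    # 1. Ask the model itself to propose candidate instructions...
    candidates = [
        llm(f"Write an instruction for this task: {task_description}")
        for _ in range(n_candidates)
    ]
    # 2. ...then keep whichever candidate scores best on the dev set.
    return max(candidates, key=lambda p: accuracy(p, dev_set))
```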

Using algorithmic instructions, large language models (like ChatGPT) can in fact achieve high accuracy on tasks such as nineteen-digit addition.
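The “algorithmic instructions” idea concerns the prompt, not the model: rather than asking for the sum outright, the prompt walks the model through the school addition algorithm digit by digit. A minimal sketch of such a prompt builder (the wording is invented here; the underlying papers use much longer worked examples):

```python
# Builds an "algorithmic" addition prompt that spells out the school
# algorithm step by step. The prompt wording is invented for illustration.
def algorithmic_addition_prompt(a: int, b: int) -> str:
    return (
        "Add the two numbers using the school algorithm.\n"
        "Work right to left, one digit pair per line, tracking the carry,\n"
        "e.g. '7 + 5 + carry 1 = 13, write 3, carry 1'.\n"
        "When the digits run out, read off the final result.\n"
        f"Now compute: {a} + {b} ="
    )

# Example with two nineteen-digit numbers
print(algorithmic_addition_prompt(4703128859063271845, 5196842307745091268))
```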

Learn More

If you would like to learn more about AI Safety or our mission at AISIG, we have written an extensive piece with additional resources here.

A great YouTube channel to explore is AI Explained.

You can sign up to our newsletter here to stay informed.
