Existential Risk Observatory

Reducing existential risk by informing the public debate.

Existential Risk Observatory is looking for a full-time or part-time Research Intern interested in how to pause AI.

Overview:

  • Internship (3 months if full-time)
  • Full-time, with the possibility of part-time
  • Language: English
  • Location: Remote, with occasional in-person collaboration in Amsterdam
  • Salary: €1,000/month for a full-time internship
  • Start date: flexible
  • Closing date: open until filled

Job Description

You will independently perform research aimed at answering the question: how can we pause AI effectively for as long as needed? To do so, you will read and process literature, write a report with potentially novel ideas, meet with your supervisor, give presentations to interested experts and/or laypeople, and contact researchers outside the organisation.

The work can be done either fully remotely or (partially) in person at our Amsterdam office.

The final deliverable is a report that brings together the available information on how to achieve a robust, long-lasting implementation of the pause with as few downsides as possible.

As a researcher, you will work mostly independently, without much support from colleagues.

About the pause

Without a clear solution to the enormous problem of aligning superhuman AI (both technically and socially), we think this technology should not be built.

Therefore, we and many others have called for a pause in the development of AI beyond a certain capabilities level.

In the short term, such a pause could be implemented fairly straightforwardly (given public awareness and political backing), since leading AI models are currently large and expensive to train, and few companies are at the cutting edge. However, eventually, and perhaps soon, hardware and algorithmic improvements could make uncontrollable AI available to everyone.

How should we enforce a pause in such a scenario?

That’s the question you will help to answer. Solution directions you will look into include hardware regulation, data regulation, and others.

Role Requirements

We’re mostly looking for someone who is deeply motivated to pause AI and already has a good understanding of why this needs to be done and perhaps ideas on how to do it.

An existing network in the existential risk world would be a significant plus, but affiliation with any particular group, such as Effective Altruism or Rationalism, is not required.

In addition, we are looking for a candidate that has the following:

  • Academic educational level; preferably you have finished your Bachelor’s degree.
  • Preference for STEM background.
  • Strong analytical skills, demonstrated by academic credentials.
  • Great understanding of AI existential risk, especially loss of control. Ideally you know about the different existential risk threat models by Yudkowsky/Bostrom, Paul Christiano, and Bengio/Hinton, proposed solutions, and where they could break down.
  • You can write a decent report in English.
  • First principles thinker: show us where you have used your amazingly good world model to come up with an original result.
  • Self-starter: you don’t mind working without much supervision or close collaboration with colleagues. Ideally, you have shown that you can work well independently and get things done.

How to apply?

Please send your motivation letter and CV to info@existentialriskobservatory.org.

We review applications on a rolling basis, so if you are interested in this internship, we encourage you to apply soon.
