Last updated Feb 24, 2021
As of February 2021, we have been running for 33 months. Many grantees are still at a relatively early stage with their work and careers. Below we have collated a list of outputs that people have volunteered to share thus far.
This should not be interpreted as an exhaustive list of everything of value that the Centre for Enabling EA Learning & Research (CEEALAR) has produced. We have only included things for which the value can be independently verified. This list likely captures less than half of the actual value.
Money: So far ~£184,000* has been spent on hosting our residents, of which ~£26,000 was contributed by residents. Everything below is a result of that funding.
Time: ~12,600 person-days spent at CEEALAR.
Summary of Outputs
- The incubation of 3 EA projects with potential for scaling (including CEEALAR);
- 23 online course modules followed;
- 2.5 online course modules produced;
- 102 posts on the EA Forum, Less Wrong and the AI Alignment Forum (with a total of ~2500 karma);
- 5 papers published, 2 preprints, 1 submission and 1 revision;
- 23 other pieces of writing, including blogs, reports and talks;
- 5 code repositories contributed to;
- 5 AI Safety / X-risk events, 1 rationality workshop and 2 EA retreats organised and hosted; 2 further EA retreats organised;
- 3 internships and 5 jobs earned at EA organisations; 2 PhD places earned.
Full list of outputs below.
- Title with link [C] (K)
C = counterfactual likelihood of the output happening without CEEALAR.
K = karma on the EA Forum or Less Wrong; paired numbers denote (Less Wrong; Alignment Forum) karma.
AI Safety related
(Note that these will appear again below under the main organizers/lecturers’ names)
- AI Safety Learning By Doing Workshop (August 2019)
- AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- AI Safety Learning By Doing Workshop (October 2019)
A job at the Machine Intelligence Research Institute (MIRI), following a 3-month work trial.
Road to AI Safety Excellence (RAISE):
Nearly the entirety of this online course was created by grantees.
- RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (68)
- RAISE lessons on Fundamentals of Formalization [5%] (32)
- Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
“I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.”
- Optimization Regularization through Time Penalty [0%] (12; 6) (“This post resulted from conversations at the EA Hotel [CEEALAR] and would therefore not have happened without the hotel.”)
- The Game Theory of Blackmail (23; 6) (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.“)
- Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
“I’ve still got a few more posts on infinity to write up, but here’s the posts I’ve made on LessWrong since arriving [with estimates of how likely they were to be written had I not been at the hotel [CEEALAR]]”:
- Summary: Surreal Decisions [50%] (27)
- An Extensive Categorisation of Infinite Paradoxes [80%] (-2)
- On Disingenuity [50%] (33)
- On Abstract Systems [50%] (15)
- Deconfusing Logical Counterfactuals [75%] (27; 5)
- Debate AI and the Decision to Release an AI [90%] (10)
“At the hotel [CEEALAR], I was working on projects and self-study to prep for seeking a machine learning job. I think the 6 months I spent there helped my resume & skills become significantly stronger for this kind of job than they would’ve been otherwise. I also optimized for acquiring ML knowledge relevant to AI Safety and EA-related machine learning project ideas of mine, and this effort felt pretty successful. After my stay, I was unexpectedly offered a high-paying remote job which let me set my own hours, but didn’t have anything to do with machine learning. After extensive consideration of the pros & cons, I took the job. I’m now planning to do that part-time from a low cost of living location, and spend the rest of my time studying ML with a stronger AI Safety focus, plus writing up some ideas of mine related to AI Safety. Although the things I did at the hotel didn’t help me get this sweet remote job, the learning and thinking I did felt quite valuable on its own. My time spent at the hotel provided further evidence to me that I’m capable of self-directed study & research. I also decided that further direct optimization for industry career capital won’t help me a lot in thinking about AI Safety better–this was part of why I didn’t go for a machine learning role as originally planned. I’ve donated thousands of dollars to the hotel, and I’m happy to chat with donors considering donations of $1000 or greater regarding the pros & cons of the hotel as a giving opportunity.”
- Improving Your Statistical Inferences (21 hours)
- MITx Probability
- Statistical Learning
- Formal Software Verification
- ARIMA Modeling with R
- Introduction to Recommender Systems (20-48 hours)
- Text Mining and Analytics
- Introduction to Time Series Analysis
- Regression Models
- Annihilating aliens & Rare Earth suggest early filter (8)
- Believing others’ priors (9)
- AI development incentive gradients are not uniformly terrible (23)
- Should donor lottery winners write reports? (29)
- Distance Functions are Hard (40; 14)
- What are concrete examples of potential “lock-in” in AI research? (18; 10)
- Non-anthropically, what makes us think human-level intelligence is possible? (10)
- The Moral Circle is not a Circle (17)
- Cognitive Dissonance and Veg*nism (7)
- What are we assuming about utility functions? [1%] (18; 8)
- 8 AIS ideas [1%] (N/A)
- Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- Critiquing “What failure looks like” [1%] (26; 11; featured in MIRI’s Jan 2020 Newsletter)
- Course: Python Programming: A Concise Introduction [20%]
- Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design.
- Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- Code for lightweight Python deep learning library [5%]
- NLP implementations [5%]
- Thinking of tool AIs [0%] (5)
- Wireheading and discontinuity [25%] (22; 10)
- Contributions to the sequence Thoughts on Goal-Directedness [25%]
- Goals and short descriptions [25%] (9; 5)
- Postponing research can sometimes be the optimal decision [25%] (28)
- Decision Theory is multifaceted [25%] (6; 4)
- Contribution to Literature Review on Goal-Directedness [20%] (53; 27)
Global Catastrophic Risks related
- Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
- Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- Incorporated Convergence [95%]
- Applied for 501c3 non-profit status for Convergence [non-profit status approved in 2019] [95%]
- Built new website for Convergence [90%]
- Designed Convergence presentation (slides, notes) and held it at the Future of Humanity Institute [80%]
- Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- Organised AI Strategy and X-Risk Unconference (AIXSU) [1%]
- The ‘far future’ is not just the far future [99%] (29)
- State Space of X-Risk Trajectories [95%] (24)
- Collaborated on 14 other Convergence publications [97%]
- Got non-profit status for Convergence Analysis and established it legally [90%]
- Published, or provided the primary ideas for and directed the publishing of, 19 EA/LW forum posts (see our publications document for more details: Convergence publications) [80%] (~30 karma average)
(First two with coauthors; first and third mostly written before arriving at CEEALAR.)
- Using vector fields to visualise preferences and make them consistent [90%] (41; 3)
- Four components of strategy research [50%] (18)
- Value uncertainty [95%] (17)
- Food Crisis – Cascading Events from COVID-19 & Locusts (97)
- Helped Launch the Food Systems Handbook (announcement)
Rationality or community building related
(Note that these will appear again below under the main organizers/lecturers’ names)
- EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
- EA Glasgow (March 2019)
- Athena Rationality Workshop (June 2019) (retrospective)
- Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- Helped organise the EA Values-to-Actions Retreat [33%]
- Helped organise the EA Community Health Unconference [33%]
- Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- EA is vetting-constrained [10%] (106)
- The Home Base of EA [90%] (21)
- Task Y: representing EA in your field [90%] (11)
- We can all be high status [10%] (61)
- The housekeeper [10%] (26)
- What makes a good culture? [90%] (30)
- Organizer and instructor for the Athena Rationality Workshop (June 2019)
- The entirety of Project Metis [5%]
- Posts on LessWrong:
- The 3 Books Technique for Learning a New Skill [5%] (157)
- A Framework for Internal Debugging [5%] (42)
- S-Curves for Trend Forecasting [5%] (103)
- What Vibing Feels Like [5%] (17)
- How to Understand and Mitigate Risk [5%] (50)
- Lottery Ticket Hypothesis [5%]
- What I Learned Dropping Out of High School [1%]
- The End of Education [1%]
- My Quarantine Reading List [1%]
- How Steven Pinker Could Become Really Rich [1%]
- Questions to Guide Life and Learning [1%]
- Online Standardized Tests Are a Bad Idea [1%]
- The Single Best Policy to Combat Climate Change [1%]
- The Public Good of Education [1%]
- Trillion Dollar Bills on the Sidewalk [1%]
- On Schools and Churches [1%]
- Harari on Religions and Ideologies [1%]
- Students are Employees [1%]
- Patrick Collison on Effective Altruism [1%] (60)
- List of Peter Thiel’s Online Writings [5%]
- Books I’ve Been Reading [1%]
- Blogs I’ve Been Reading [1%]
- The Best Educational Institution in the World [1%] (11)
- The Case for Education [1%] (13)
Global health, development and welfare related
- Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology. https://doi.org/10.1007/s10654-019-00571-w
- Effect heterogeneity and external validity (6)
- Effect heterogeneity and external validity in medicine (48)
“Even though I only spent 2-3 weeks at the EA Hotel [CEEALAR], I found my time there to be extremely valuable in thinking more deeply about particular areas of EA, discussing potential career plans/paths to impact with other guests and making some progress on my research work. Aside from being a particularly enjoyable and productive time personally, I’ve since gone on to start a PhD in game theory/mechanism design related to longtermist considerations as well as spending time conducting research at three different EA organisations and I think this was all made substantially more likely (i.e. at least 20% more likely) due to my brief stay at the hotel.”
- Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report. [99%]
- Hired as a research analyst for Rethink Priorities [95%].
- Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. https://doi.org/10.1093/cid/ciaa935 [70%]
- Some of the content of https://1daysooner.org/ [70%]
- Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (39)
- Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint]. https://doi.org/10.31219/osf.io/swd4a [70%]
- Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]. https://doi.org/10.2139/ssrn.3627273
- Some confidential COVID-19-related policy reports. [70%]
- Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB) https://www.scrubcovid19.org/
- Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- A confidential evaluation of an anxiety app. [90%]
- Various unpublished documents for the Happier Lives Institute. [80%]
“Altogether I spent approximately 9–10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes.
The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents.
Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.”
- Distinctions in MU123, MST124, MST125 (Mathematics modules) and M140 (Statistics), The Open University.
- Completed ‘Justice’ (Harvard MOOC; Verified Certificate).
- Completed GV100 (Intro to Political Theory) and MA100 (Mathematical Methods), London School of Economics, [auditing modules].
- Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University.
- Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University.
Animal welfare related
- The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- Sharks probably do feel pain: a reply to Michael Tye and others [50%] (19)
- Why I’m focusing on invertebrate sentience [75%] (53)
- Interview with Jon Mallatt about invertebrate consciousness [50%] (82; winner of 1st place EA Forum Prize for Apr 2019)
- My recommendations for RSI treatment [25%] (60)
- Thoughts on the welfare of farmed insects [50%] (31)
- Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- My recommendations for gratitude exercises [50%] (41)
- Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Got a research position (part-time) at Animal Ethics [25%]
- Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job). Can’t yet name the book or its author due to non-disclosure agreement. [70%]
- Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work. [90%]
- Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. It is currently awaiting scores from reviewers. [10%]
- “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Accepted to give a talk and a poster to the academic session of EAG 2020 in San Francisco.
- Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”). [40%]
- Received an (unpaid) internship at Animal Ethics [1%].
- Rodents farmed for pet snake food [99%] (71)
- Will companies meet their animal welfare commitments? [96%] (112; winner of 3rd place EA Forum Prize for Feb 2019)
- Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%].
- Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”) [50%]”.
- EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
Next steps for departing grantees
- Kris Gulati takes up an Economics MRes/PhD at the University of Glasgow.
- Rhys Southan embarks upon a PhD in Philosophy at the University of Oxford.
- Linda Linsefors starts AI Safety Support.
- Davide Zagami goes on to work at AI/computer vision startup HoraVision.
- Chris Leong goes on to work at Triplebyte.
- Ed Wise goes on to complete a Master’s in International Relations (cum laude) at Leiden University.
- Rafe Kennedy starts a job at the Machine Intelligence Research Institute (MIRI).
- Hoagy Cunningham took part in the MIRI Summer Fellows program, and goes on to pursue AI policy opportunities as a civil servant in the UK government.
- Aron Mill enrolls in a Master’s program at TU Berlin.
*This is the total cost of the project to date (1 October 2020), not including the purchase of the building (£132,276.95, including building survey and conveyancing).