Outputs
Last updated Jun 17, 2023
We have been running for just over 5 years. Most grantees are at a relatively early stage in their work and careers. Below we have collated a list of outputs that people have volunteered to share.
This should not be interpreted as an exhaustive list of everything of value that the Centre for Enabling EA Learning & Research (CEEALAR) has produced. We have only included things for which the value can be independently verified. This list likely captures less than half of the actual value.
Total expenses
Money: So far ~£480,000* has been spent on hosting our residents, of which ~£43,000 was contributed by residents. Everything below is a result of that funding.
Time: ~20,900 person-days spent at CEEALAR.
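Dividing money by time, that works out to roughly £23 per person-day, or about £21 net of residents' contributions.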
Summary of Outputs
- 7 internships and 12 jobs secured at EA organisations; 2 PhD places obtained;
- The incubation of 4 EA projects with potential for scaling (including CEEALAR);
- 41 online course modules followed;
- 2.5 online course modules produced;
- 113 posts on the EA Forum, Less Wrong and the AI Alignment Forum (with a total of ~3300 karma);
- 5 papers published, 2 preprints, 1 submission and 1 revision;
- 33 other pieces of writing, including blogs, reports and talks;
- 8 code repositories contributed to;
- 2 podcasts (with 8 episodes produced at CEEALAR);
- 5 AI Safety / X-risk events, 1 rationality workshop and 3 EA retreats organised and hosted; 5 EA / X-risk retreats organised;
- 24 projects (that don't fit into the above) worked on.
See the other tabs above for highlights, and for lists of all outputs sorted by cause area, date, grantee and type (see here for a spreadsheet of all the outputs). The "Next steps" tab shows the next steps of a selection of our grantees following their stay at CEEALAR.
*This is the total cost of the project to date, excluding the purchase of the building (£132,276.95 including building survey and conveyancing).
Highlights
Key:
- Cause Area - Name of Grantee - Output Type - Title with link [C%*] (K**)
*C% = estimated counterfactual likelihood (in percent) of the output happening without CEEALAR; e.g. [50%] means a 50% chance it would have happened anyway.
**K = karma on the EA Forum or Less Wrong; a pair (x; y) denotes (Less Wrong; Alignment Forum) karma.
2023 Q2
- AI Alignment - Chris Leong - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - Jaeson Booker - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - Michele Campolo - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
2023 Q1
- AI Alignment - Guillaume Corlouer - Course - Got selected for the PIBBSS Fellowship
- AI Alignment - Hamish Huggard - Project - Video on AI safety
- AI Alignment - Jaeson Booker - Other writing - Book Chapter - Authored two-thirds of the AI Safety handbook [70%]
- AI Alignment - Jaeson Booker - Project - Created the AI Safety Strategy Group's website
- AI Alignment - Vinay Hiremath - Placement - Job - Selected as a Teaching Assistant for the ML for Alignment Bootcamp
- Meta or Community Building - Hamish Huggard - Project - Built and launched https://aisafety.world/
2022 Q3
- AI Alignment - David Matlosci - Course - Got selected for the second phase of SERI MATS and went to California to participate in it
2022 Q2
- AI Alignment - Lucas Teixeira - Placement - Internship - Internship at Conjecture
- AI Alignment - Samuel Knoche - Placement - Job - Becoming a Contractor for OpenAI
- AI Alignment - Vinay Hiremath - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning for Alignment Bootcamp)
- Meta or Community Building - Laura C - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
2022 Q1
- AI Alignment - Peter Barnett - Placement - Internship - Accepted for an internship at CHAI
- AI Alignment - Peter Barnett - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- Meta or Community Building - Simmo Simpson - Placement - Job - Job placement as Executive Assistant to the COO of Alvea
2021 Q2
- AI Alignment - Quinn Dougherty - Placement - Internship - Obtained internship at SERI
- AI Alignment - Quinn Dougherty - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (25)
2021 Q1
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast
2020 Q4
- Meta or Community Building - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
2020 Q3
- Animal Welfare - Rhys Southan - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Meta or Community Building - Denisa Pop - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
2020 Q2
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (58; 30)
- Global Health and Development - Derek Foster - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Global Health and Development - Kris Gulati - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- Meta or Community Building - Denisa Pop - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- X-Risks - Aron Mill - Other writing - Project - Helped launch the Food Systems Handbook (announcement) (30)
- X-Risks - Aron Mill - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
2019 Q4
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - Rafe Kennedy - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- AI Alignment - Samuel Knoche - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - Samuel Knoche - Code - NLP implementations [5%]
- AI Alignment - Samuel Knoche - Code - Code for lightweight Python deep learning library [5%]
- Global Health and Development - Anders Huitfeldt - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- Meta or Community Building - Samuel Knoche - Code - Lottery Ticket Hypothesis [5%]
- X-Risks - Markus Salmela - Project - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
2019 Q3
- AI Alignment - [multiple people] - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - [multiple people] - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - The Moral Circle is not a Circle (36)
- AI Alignment - Davide Zagami - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
2019 Q2
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (33)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (82)
2019 Q1
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (99)
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - EA is vetting-constrained [10%] (125)
2018 Q4
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- Animal Welfare - Frederik Bechtold - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
2018 Q3
- Animal Welfare - Max Carpendale - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
All outputs by cause area
Key:
- Quarter - Name of Grantee - Output Type - Title with link [C%*] (K**)
*C% = estimated counterfactual likelihood (in percent) of the output happening without CEEALAR.
**K = karma on the EA Forum or Less Wrong; a pair (x; y) denotes (Less Wrong; Alignment Forum) karma.
AI Alignment
- 2023 Q2 - Can Rager - Post (EA/LW/AF) - Wrote a post on Understanding mesa-optimisation using toy models (37)
- 2023 Q2 - Chris Leong - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- 2023 Q2 - Jaeson Booker - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- 2023 Q2 - Jaeson Booker - Other writing - Review - Reviewed a policy report on AI safety
- 2023 Q2 - Michele Campolo - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- 2023 Q2 - Nia Gardner - Event - Organisation - Organised an ML Bootcamp in Germany
- 2023 Q1 - Can Rager - Code - Prepared for and completed the coding test for the ARENA programme
- 2023 Q1 - Can Rager - Other writing - Report - Developed a research proposal
- 2023 Q1 - Guillaume Corlouer - Course - Got selected for the PIBBSS Fellowship
- 2023 Q1 - Hamish Huggard - Project - Video on AI safety
- 2023 Q1 - Hamish Huggard - Project - Animated the AXRP video "How do neural networks do modular addition?"
- 2023 Q1 - Hamish Huggard - Project - Animated the AXRP video "Vanessa Kosoy on the Monotonicity Principle"
- 2023 Q1 - Hamish Huggard - Project - Animated the AXRP video "What is mechanistic interpretability? Neel Nanda explains"
- 2023 Q1 - Jaeson Booker - Other writing - Book Chapter - Authored two-thirds of the AI Safety handbook [70%]
- 2023 Q1 - Jaeson Booker - Project - Created the AI Safety Strategy Group's website
- 2023 Q1 - Michele Campolo - Post (EA/LW/AF) - On value in humans, other animals, and AI (7; 3)
- 2023 Q1 - Vinay Hiremath - Placement - Job - Selected as a Teaching Assistant for the ML for Alignment Bootcamp
- 2022 Q3 - David Matlosci - Course - Got selected for the second phase of SERI MATS and went to California to participate in it
- 2022 Q2 - Aaron Maiwald - Course - Courses on mathematics as part of his bachelor's degree in Cognitive Science
- 2022 Q2 - Aaron Maiwald - Course - Courses on ML as part of his bachelor's degree in Cognitive Science
- 2022 Q2 - Lucas Teixeira - Placement - Internship - Internship at Conjecture
- 2022 Q2 - Michele Campolo - Post (EA/LW/AF) - Some alternative AI safety research projects [10%] (8; 2)
- 2022 Q2 - Samuel Knoche - Placement - Job - Becoming a Contractor for OpenAI
- 2022 Q2 - Vinay Hiremath - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning for Alignment Bootcamp)
- 2022 Q1 - David King - Course - AGISF
- 2022 Q1 - Jaeson Booker - Course - AGISF
- 2022 Q1 - Michele Campolo - Course - Attended an online course on moral psychology offered by the University of Warwick
- 2022 Q1 - Peter Barnett - Placement - Internship - Accepted for an internship at CHAI
- 2022 Q1 - Peter Barnett - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- 2022 Q1 - Peter Barnett - Post (EA/LW/AF) - Thoughts on Dangerous Learned Optimization (4)
- 2022 Q1 - Peter Barnett - Post (EA/LW/AF) - Alignment Problems All the Way Down (26)
- 2022 Q1 - Theo Knopfer - Course - AGISF
- 2022 Q1 - Vinay Hiremath - Course - AGISF
- 2021 Q4 - Charlie Steiner - Post (EA/LW/AF) - Reducing Goodhart sequence (115; 71)
- 2021 Q4 - Jaeson Booker - Code - AI Policy Simulator
- 2021 Q4 - Michele Campolo - Post (EA/LW/AF) - From language to ethics by automated reasoning (8; 2)
- 2021 Q3 - Michele Campolo - Event (attendance) - Attended GoodAI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- 2021 Q2 - Michele Campolo - Post (EA/LW/AF) - Naturalism and AI alignment (17; 5)
- 2021 Q2 - Quinn Dougherty - Event (attendance) - AI Safety Camp
- 2021 Q2 - Quinn Dougherty - Placement - Internship - Obtained internship at SERI
- 2021 Q2 - Quinn Dougherty - Podcast - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (28)
- 2021 Q2 - Samuel Knoche - Code - Creating code [1%]
- 2021 Q1 - Michele Campolo - Post (EA/LW/AF) - Contribution to Literature Review on Goal-Directedness [20%] (71; 32)
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 1
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 2
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 3
- 2020 Q3 - Luminita Bogatean - Course - Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design [20%]
- 2020 Q3 - Michele Campolo - Post (EA/LW/AF) - Decision Theory is multifaceted [25%] (9; 7)
- 2020 Q3 - Michele Campolo - Post (EA/LW/AF) - Goals and short descriptions [25%] (14; 7)
- 2020 Q3 - Michele Campolo - Post (EA/LW/AF) - Postponing research can sometimes be the optimal decision [25%] (29)
- 2020 Q2 - Michele Campolo - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (97; 44)
- 2020 Q1 - Michele Campolo - Post (EA/LW/AF) - Wireheading and discontinuity [25%] (21; 11)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - What are we assuming about utility functions? [1%] (17; 9)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - 8 AIS ideas [1%]
- 2019 Q4 - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- 2019 Q4 - Luminita Bogatean - Course - Course: Python Programming: A Concise Introduction [20%]
- 2019 Q4 - Michele Campolo - Post (EA/LW/AF) - Thinking of tool AIs [0%] (6)
- 2019 Q4 - [multiple people] - Event - AI Safety - AI Safety Learning By Doing Workshop (October 2019)
- 2019 Q4 - [multiple people] - Event - X-risk - AI Strategy and X-Risk Unconference (AIXSU)
- 2019 Q4 - Rafe Kennedy - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- 2019 Q4 - Samuel Knoche - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- 2019 Q4 - Samuel Knoche - Code - NLP implementations [5%]
- 2019 Q4 - Samuel Knoche - Code - Code for lightweight Python deep learning library [5%]
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - Cognitive Dissonance and Veg*nism (7)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - The Moral Circle is not a Circle (38)
- 2019 Q3 - Davide Zagami - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- 2019 Q3 - Linda Linsefors - Event - Organisation - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- 2019 Q3 - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- 2019 Q3 - Linda Linsefors - Statement - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
- 2019 Q3 - [multiple people] - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- 2019 Q3 - [multiple people] - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- 2019 Q1 - Anonymous 1 - Course - MITx Probability
- 2019 Q1 - Anonymous 1 - Course - Model Thinking
- 2019 Q1 - Anonymous 1 - Course - Probabilistic Graphical Models
- 2019 Q1 - Chris Leong - Post (EA/LW/AF) - Deconfusing Logical Counterfactuals [75%] (27; 6)
- 2019 Q1 - Chris Leong - Post (EA/LW/AF) - Debate AI and the Decision to Release an AI [90%] (9)
- 2019 Q1 - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- 2019 Q1 - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
- 2019 Q1 - John Maxwell - Course - MITx Probability
- 2019 Q1 - John Maxwell - Course - Improving Your Statistical Inferences (21 hours)
- 2019 Q1 - John Maxwell - Course - Statistical Learning
- 2019 Q1 - Linda Linsefors - Post (EA/LW/AF) - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.“) (25; 6)
- 2019 Q1 - Linda Linsefors - Post (EA/LW/AF) - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and would therefore not have happened without the hotel.”) [0%] (11; 7)
- 2019 Q1 - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- 2019 Q1 - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- 2019 Q1 - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- 2018 Q4 - Anonymous 1 - Post (EA/LW/AF) - Believing others’ priors (8)
- 2018 Q4 - Anonymous 1 - Post (EA/LW/AF) - AI development incentive gradients are not uniformly terrible (21)
- 2018 Q4 - Anonymous 1 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - On Abstract Systems [50%] (14)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - Summary: Surreal Decisions [50%] (29)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - An Extensive Categorisation of Infinite Paradoxes [80%] (-14)
- 2018 Q4 - John Maxwell - Course - ARIMA Modeling with R
- 2018 Q4 - John Maxwell - Course - Introduction to Recommender Systems (20-48 hours)
- 2018 Q4 - John Maxwell - Course - Formal Software Verification
- 2018 Q4 - John Maxwell - Course - Text Mining and Analytics
- 2018 Q3 - Anonymous 1 - Post (EA/LW/AF) - Annihilating aliens & Rare Earth suggest early filter (9)
- 2018 Q3 - John Maxwell - Course - Introduction to Time Series Analysis
- 2018 Q3 - John Maxwell - Course - Regression Models
Animal Welfare
- 2023 Q2 - Ramika Prajapati - Project - Signed a contract to author a Scientific Policy Report for Rethink Priorities' Insect Welfare Institute, to be presented to the UK Government in 2023 [20%]
- 2020 Q3 - Rhys Southan - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- 2020 Q1 - Rhys Southan - Other writing - Talk - Accepted to give a talk and present a poster at the academic session of EAG 2020 in San Francisco
- 2019 Q4 - Rhys Southan - Paper (Submitted) - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
- 2019 Q4 - Rhys Southan - Placement - Job - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- 2019 Q3 - Magnus Vinding - Idea - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
- 2019 Q3 - Magnus Vinding - Paper (Revising) - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- 2019 Q3 - Max Carpendale - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- 2019 Q3 - Max Carpendale - Post (EA/LW/AF) - My recommendations for gratitude exercises [50%] (40)
- 2019 Q3 - Nix Goldowsky-Dill - Other writing - Comment - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
- 2019 Q3 - Rhys Southan - Other writing - Essay - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- 2019 Q3 - Rhys Southan - Placement - Job - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - My recommendations for RSI treatment [25%] (77)
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (34)
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (83)
- 2019 Q1 - Max Carpendale - Post (EA/LW/AF) - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- 2019 Q1 - Max Carpendale - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (30)
- 2019 Q1 - Saulius Šimčikas - Post (EA/LW/AF) - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (115)
- 2019 Q1 - Saulius Šimčikas - Post (EA/LW/AF) - Rodents farmed for pet snake food [99%] (75)
- 2018 Q4 - Frederik Bechtold - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
- 2018 Q4 - Max Carpendale - Post (EA/LW/AF) - Why I’m focusing on invertebrate sentience [75%] (56)
- 2018 Q3 - Magnus Vinding - Other writing - Essay - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
- 2018 Q3 - Max Carpendale - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
Global Health and Development
- 2021 Q3 - Nick Stares - Course - CORE The Economy
- 2020 Q3 - Derek Foster - Paper (Preprint) - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- 2020 Q3 - Derek Foster - Paper (Published) - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- 2020 Q3 - Kris Gulati - Course - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- 2020 Q3 - Kris Gulati - Statement - “All together I spent approximately 9–10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT Opencourseware resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents. Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- 2020 Q2 - Derek Foster - Other writing - Project - Some of the content of https://1daysooner.org/ [70%]
- 2020 Q2 - Derek Foster - Other writing - Project - Various unpublished documents for the Happier Lives Institute [80%]
- 2020 Q2 - Derek Foster - Other writing - Report - Some confidential COVID-19-related policy reports [70%]
- 2020 Q2 - Derek Foster - Paper (Preprint) - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
- 2020 Q2 - Derek Foster - Post (EA/LW/AF) - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- 2020 Q2 - Derek Foster - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- 2020 Q2 - Derek Foster - Project - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- 2020 Q2 - Kris Gulati - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- 2020 Q1 - Derek Foster - Other writing - Project - A confidential evaluation of an anxiety app [90%]
- 2020 Q1 - Kris Gulati - Course - Distinction in M140 (Statistics), The Open University
- 2020 Q1 - Kris Gulati - Course - Distinction in MST125 (Mathematics), The Open University
- 2019 Q4 - Anders Huitfeldt - Paper (Published) - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- 2019 Q4 - Anders Huitfeldt - Post (EA/LW/AF) - Post on EA Forum: Effect heterogeneity and external validity (6)
- 2019 Q4 - Anders Huitfeldt - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- 2019 Q4 - Kris Gulati - Course - Completed MA100 (Mathematical Methods)[auditing module], London School of Economics
- 2019 Q4 - Kris Gulati - Course - Completed GV100 (Intro to Political Theory)[auditing module], London School of Economics
- 2019 Q4 - Kris Gulati - Course - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- 2019 Q3 - Derek Foster - Post (EA/LW/AF) - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- 2019 Q3 - Kris Gulati - Course - Distinction in MU123 (Mathematics), The Open University
- 2019 Q3 - Kris Gulati - Course - Distinction in MST124 (Mathematics), The Open University
- 2019 Q1 - Derek Foster - Other writing - Book Chapter - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
- 2018 Q4 - Derek Foster - Placement - Job - Hired as a research analyst for Rethink Priorities [95%]
Meta or Community Building
- 2023 Q2 - Daniela Tiznado - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- 2023 Q2 - [multiple people] - Event - Retreat - Hosted ALLFED team retreat
- 2023 Q2 - Onicah Ntswejakgosi - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- 2023 Q2 - Ramika Prajapati - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- 2023 Q1 - Can Rager - Event - AI Safety - Conducted online training for software engineering and mechanistic interpretability
- 2023 Q1 - Chris Leong - Project - Worked together with Abram Demski on an Adversarial Collaboration project
- 2023 Q1 - Hamish Huggard - Project - Built and launched https://aisafety.world/
- 2023 Q1 - Hamish Huggard - Project - Video to help an EANZ fundraiser for GiveDirectly
- 2023 Q1 - Jaeson Booker - Project - Created AI Safety Strategy Group and GitHub Project Board [80%]
- 2022 Q4 - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (building maintenance)
- 2022 Q2 - Laura C - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
- 2022 Q2 - Luminita Bogatean - Course - First out of three courses in the UX Programme by CareerFoundry
- 2022 Q1 - Denisa Pop - Project - Received a $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- 2022 Q1 - Severin Seehrich - Event - Organisation - Organised Summer Solstice Celebration event for EAs
- 2022 Q1 - Severin Seehrich - Post (EA/LW/AF) - The Berlin Hub: Longtermist co-living space (plan) (84)
- 2022 Q1 - Severin Seehrich - Project - Designed The Berlin Longtermist Hub
- 2022 Q1 - Simmo Simpson - Placement - Job - Job placement as Executive Assistant to the COO of Alvea
- 2022 Q1 - Vinay Hiremath - Project - Built website for SERI conference
- 2021 Q4 - Aaron Maiwald - Podcast - 70% of the work for the Gutes Einfach Tun Podcast Episodes 3, 4, 5 and 6
- 2021 Q4 - Simmo Simpson - Course - Became an Association for Coaching accredited coach
- 2021 Q3 - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast launch & Episode 1
- 2021 Q3 - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast Episode 2
- 2021 Q2 - Anonymous - Placement - Job - Obtained a job at the EA org Momentum
- 2021 Q2 - Denisa Pop - Course - Introduction to Group Facilitation
- 2021 Q2 - Jack Harley - Project - Created Longevity wiki and expanded the team to 8 members
- 2021 Q2 - Quinn Dougherty - Event - Organisation - Organised an event for AI Philadelphia, inviting anti-aging expert Jack Harley
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - Cliffnotes to Craft of Research parts I, II, and III (7)
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - What am I fighting for? (12)
- 2020 Q4 - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
- 2020 Q3 - Denisa Pop - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- 2020 Q3 - Samuel Knoche - Other writing - Blog - Blogs I’ve Been Reading [1%]
- 2020 Q3 - Samuel Knoche - Other writing - Blog - Books I’ve Been Reading [1%]
- 2020 Q3 - Samuel Knoche - Other writing - Blog - List of Peter Thiel’s Online Writings [5%]
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - The Best Educational Institution in the World [1%] (12)
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - The Case for Education [1%] (23)
- 2020 Q2 - Denisa Pop - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- 2020 Q2 - Luminita Bogatean - Placement - Job - Becoming Operations Manager at CEEALAR
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Students are Employees [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - My Quarantine Reading List [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - The End of Education [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - How Steven Pinker Could Become Really Rich [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - The Public Good of Education [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Trillion Dollar Bills on the Sidewalk [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - On Schools and Churches [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Questions to Guide Life and Learning [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Online Standardized Tests Are a Bad Idea [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Harari on Religions and Ideologies [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - The Single Best Policy to Combat Climate Change [1%]
- 2020 Q2 - Samuel Knoche - Post (EA/LW/AF) - Patrick Collison on Effective Altruism [1%] (97)
- 2020 Q1 - Samuel Knoche - Other writing - Blog - What I Learned Dropping Out of High School [1%]
- 2019 Q4 - Samuel Knoche - Code - Lottery Ticket Hypothesis [5%]
- 2019 Q2 - Denisa Pop - Other writing - Talk - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- 2019 Q2 - Denisa Pop - Project - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- 2019 Q2 - Matt Goldenberg - Event - Organisation - Organizer and instructor for the Athena Rationality Workshop (June 2019)
- 2019 Q2 - [multiple people] - Event - Rationality Workshop - Athena Rationality Workshop (June 2019) (retrospective) (24)
- 2019 Q1 - Denisa Pop - Event - Organisation - Helped organise the EA Values-to-Actions Retreat [33%]
- 2019 Q1 - Denisa Pop - Event - Organisation - Helped organise the EA Community Health Unconference [33%]
- 2019 Q1 - Matt Goldenberg - Code - The entirety of Project Metis [5%]
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - What Vibing Feels Like [5%] (19)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (112)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - The 3 Books Technique for Learning a New Skill [5%] (193)
- 2019 Q1 - [multiple people] - Event - Retreat - EA Glasgow (March 2019)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - EA is vetting-constrained [10%] (129)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - Task Y: representing EA in your field [90%] (11)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - What makes a good culture? [90%] (29)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - The Home Base of EA [90%]
- 2018 Q4 - Toon Alfrink - Post (EA/LW/AF) - The housekeeper [10%] (23)
- 2018 Q4 - Toon Alfrink - Post (EA/LW/AF) - We can all be high status [10%] (62)
- 2018 Q3 - [multiple people] - Event - Retreat - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
X-Risks
- 2023 Q2 - Jaeson Booker - Other writing - Blog - Moloch, Disaster Monkeys, and Gremlins (blog post on AI safety and X-risks)
- 2023 Q2 - Jaeson Booker - Other writing - Blog - The Amoral God-King (blog post on AI safety and X-risks)
- 2023 Q2 - Jaeson Booker - Other writing - Blog - The Three Heads of the Dragon (blog post on AI safety and X-risks)
- 2022 Q2 - Theo Knopfer - Course - Accepted into CHERI Summer Fellowship
- 2022 Q2 - Theo Knopfer - Event - Organisation - Received a grant from CEA to organise retreats on x-risk in France
- 2022 Q1 - Theo Knopfer - Course - Policy Careers Europe by Training for Good (https://www.trainingforgood.com/policy-careers-europe)
- 2022 Q1 - Theo Knopfer - Course - Weapons of Mass Destruction by IEGA
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (10)
- 2020 Q3 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q3 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q3 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Aron Mill - Other writing - Project - Helped launch the Food Systems Handbook (announcement) (30)
- 2020 Q2 - Aron Mill - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
- 2020 Q2 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q2 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q2 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q2 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q2 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - State Space of X-Risk Trajectories [95%] (24)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - The ‘far future’ is not just the far future [99%] (30)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Michael Aird - Post (EA/LW/AF) - Four components of strategy research [50%] (26)
- 2020 Q1 - Michael Aird - Post (EA/LW/AF) - Using vector fields to visualise preferences and make them consistent [90%] (41; 8)
- 2020 Q1 - Michael Aird - Post (EA/LW/AF) - Value uncertainty [95%] (19)
- 2019 Q4 - David Kristoffersson - Event - Organisation - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- 2019 Q4 - Markus Salmela - Project - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- 2019 Q3 - David Kristoffersson - Project - Applied for 501(c)(3) non-profit status for Convergence [non-profit status approved in 2019] [95%]
- 2019 Q3 - Justin Shovelain - Project - Got non-profit status for Convergence Analysis and established it legally [90%]
- 2019 Q1 - David Kristoffersson - Other writing - Talk - Designed a Convergence presentation (slides, notes) and delivered it at the Future of Humanity Institute [80%]
- 2019 Q1 - David Kristoffersson - Project - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- 2019 Q1 - David Kristoffersson - Project - Built new website for Convergence [90%]
- 2019 Q1 - Markus Salmela - Paper (Published) - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
- 2018 Q4 - David Kristoffersson - Project - Incorporated Convergence [95%]
All outputs by date
Key:
- Cause Area - Name of Grantee - Output Type - Title with link [C%*] (K**)
*C% = estimated counterfactual likelihood (in percent) of the output happening without CEEALAR.
**K = karma on the EA Forum or Less Wrong; a pair (x; y) denotes (Less Wrong; Alignment Forum) karma.
2023 Q2
- AI Alignment - Can Rager - Post (EA/LW/AF) - Wrote a post on Understanding mesa-optimisation using toy models (37)
- AI Alignment - Chris Leong - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - Jaeson Booker - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - Jaeson Booker - Other writing - Review - Reviewed a policy report on AI safety
- AI Alignment - Michele Campolo - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - Nia Gardner - Event - Organisation - Organised an ML Bootcamp in Germany
- Animal Welfare - Ramika Prajapati - Project - Signed a contract to author a Scientific Policy Report for Rethink Priorities' Insect Welfare Institute, to be presented to the UK Government in 2023 [20%]
- Meta or Community Building - Daniela Tiznado - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- Meta or Community Building - [multiple people] - Event - Retreat - Hosted ALLFED team retreat
- Meta or Community Building - Onicah Ntswejakgosi - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- Meta or Community Building - Ramika Prajapati - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- X-Risks - Jaeson Booker - Other writing - Blog - Moloch, Disaster Monkeys, and Gremlins (blog post on AI safety and X-risks)
- X-Risks - Jaeson Booker - Other writing - Blog - The Amoral God-King (blog post on AI safety and X-risks)
- X-Risks - Jaeson Booker - Other writing - Blog - The Three Heads of the Dragon (blog post on AI safety and X-risks)
2023 Q1
- AI Alignment - Can Rager - Code - Prepared for and completed the coding test for the ARENA programme
- AI Alignment - Can Rager - Other writing - Report - Developed a research proposal
- AI Alignment - Guillaume Corlouer - Course - Got selected for the PIBBSS Fellowship
- AI Alignment - Hamish Huggard - Project - Video on AI safety
- AI Alignment - Hamish Huggard - Project - Animated the AXRP video "How do neural networks do modular addition?"
- AI Alignment - Hamish Huggard - Project - Animated the AXRP video "Vanessa Kosoy on the Monotonicity Principle"
- AI Alignment - Hamish Huggard - Project - Animated the AXRP video "What is mechanistic interpretability? Neel Nanda explains"
- AI Alignment - Jaeson Booker - Other writing - Book Chapter - Authored two-thirds of the AI Safety handbook [70%]
- AI Alignment - Jaeson Booker - Project - Created the AI Safety Strategy Group's website
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - On value in humans, other animals, and AI (7; 3)
- AI Alignment - Vinay Hiremath - Placement - Job - Selected as a Teaching Assistant for the ML for Alignment Bootcamp
- Meta or Community Building - Can Rager - Event - AI Safety - Conducted online training for software engineering and mechanistic interpretability
- Meta or Community Building - Chris Leong - Project - Worked together with Abram Demski on an Adversarial Collaboration project
- Meta or Community Building - Hamish Huggard - Project - Built and launched https://aisafety.world/
- Meta or Community Building - Hamish Huggard - Project - Video to help an EANZ fundraiser for GiveDirectly
- Meta or Community Building - Jaeson Booker - Project - Created AI Safety Strategy Group and GitHub Project Board [80%]
2022 Q4
- Meta or Community Building - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (building maintenance)
2022 Q3
- AI Alignment - David Matlosci - Course - Got selected for the second phase of SERI MATS and went to California to participate in it
2022 Q2
- AI Alignment - Aaron Maiwald - Course - Courses on mathematics as part of his bachelor's degree in Cognitive Science
- AI Alignment - Aaron Maiwald - Course - Courses on ML as part of his bachelor's degree in Cognitive Science
- AI Alignment - Lucas Teixeira - Placement - Internship - Internship at Conjecture
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Some alternative AI safety research projects [10%] (8; 2)
- AI Alignment - Samuel Knoche - Placement - Job - Becoming a Contractor for OpenAI
- AI Alignment - Vinay Hiremath - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning for Alignment Bootcamp)
- Meta or Community Building - Laura C - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
- Meta or Community Building - Luminita Bogatean - Course - First out of three courses in the UX Programme by CareerFoundry
- X-Risks - Theo Knopfer - Course - Accepted into CHERI Summer Fellowship
- X-Risks - Theo Knopfer - Event - Organisation - Received a grant from CEA to organise retreats on x-risk in France
2022 Q1
- AI Alignment - David King - Course - AGISF
- AI Alignment - Jaeson Booker - Course - AGISF
- AI Alignment - Michele Campolo - Course - Attended an online course on moral psychology offered by the University of Warwick
- AI Alignment - Peter Barnett - Placement - Internship - Accepted for an internship at CHAI
- AI Alignment - Peter Barnett - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- AI Alignment - Peter Barnett - Post (EA/LW/AF) - Thoughts on Dangerous Learned Optimization (4)
- AI Alignment - Peter Barnett - Post (EA/LW/AF) - Alignment Problems All the Way Down (26)
- AI Alignment - Theo Knopfer - Course - AGISF
- AI Alignment - Vinay Hiremath - Course - AGISF
- Meta or Community Building - Denisa Pop - Project - Received a $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- Meta or Community Building - Severin Seehrich - Event - Organisation - Organised Summer Solstice Celebration event for EAs
- Meta or Community Building - Severin Seehrich - Post (EA/LW/AF) - The Berlin Hub: Longtermist co-living space (plan) (84)
- Meta or Community Building - Severin Seehrich - Project - Designed The Berlin Longtermist Hub
- Meta or Community Building - Simmo Simpson - Placement - Job - Job placement as Executive Assistant to the COO of Alvea
- Meta or Community Building - Vinay Hiremath - Project - Built website for SERI conference
- X-Risks - Theo Knopfer - Course - Policy Careers Europe by Training for Good (https://www.trainingforgood.com/policy-careers-europe)
- X-Risks - Theo Knopfer - Course - Weapons of Mass Destruction by IEGA
2021 Q4
- AI Alignment - Charlie Steiner - Post (EA/LW/AF) - Reducing Goodhart sequence (115; 71)
- AI Alignment - Jaeson Booker - Code - AI Policy Simulator
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - From language to ethics by automated reasoning (8; 2)
- Meta or Community Building - Aaron Maiwald - Podcast - 70% of the work for the Gutes Einfach Tun Podcast Episodes 3, 4, 5 and 6
- Meta or Community Building - Simmo Simpson - Course - Became an Association for Coaching accredited coach
2021 Q3
- AI Alignment - Michele Campolo - Event (attendance) - Attended GoodAI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- Global Health and Development - Nick Stares - Course - CORE The Economy
- Meta or Community Building - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast launch & Episode 1
- Meta or Community Building - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast Episode 2
2021 Q2
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Naturalism and AI alignment (17; 5)
- AI Alignment - Quinn Dougherty - Event (attendance) - AI Safety Camp
- AI Alignment - Quinn Dougherty - Placement - Internship - Obtained internship at SERI
- AI Alignment - Quinn Dougherty - Podcast - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- AI Alignment - Quinn Dougherty - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (28)
- AI Alignment - Samuel Knoche - Code - Creating code [1%]
- Meta or Community Building - Anonymous - Placement - Job - Obtained a job at the EA org Momentum
- Meta or Community Building - Denisa Pop - Course - Introduction to Group Facilitation
- Meta or Community Building - Jack Harley - Project - Created Longevity wiki and expanded the team to 8 members
- Meta or Community Building - Quinn Dougherty - Event - Organisation - Organised an event for AI Philadelphia, inviting anti-aging expert Jack Harley
- Meta or Community Building - Quinn Dougherty - Post (EA/LW/AF) - Cliffnotes to Craft of Research parts I, II, and III (7)
- Meta or Community Building - Quinn Dougherty - Post (EA/LW/AF) - What am I fighting for? (12)
- X-Risks - Quinn Dougherty - Post (EA/LW/AF) - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (10)
2021 Q1
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Contribution to Literature Review on Goal-Directedness [20%] (71; 32)
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 1
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 2
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 3
2020 Q4
- Meta or Community Building - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
2020 Q3
- AI Alignment - Luminita Bogatean - Course - Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design [20%]
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Decision Theory is multifaceted [25%] (9; 7)
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Goals and short descriptions [25%] (14; 7)
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Postponing research can sometimes be the optimal decision [25%] (29)
- Animal Welfare - Rhys Southan - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Global Health and Development - Derek Foster - Paper (Preprint) - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- Global Health and Development - Derek Foster - Paper (Published) - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- Global Health and Development - Kris Gulati - Course - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- Global Health and Development - Kris Gulati - Statement - “All together I spent approximately 9–10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT Opencourseware resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents. Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- Meta or Community Building - Denisa Pop - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Blogs I’ve Been Reading [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Books I’ve Been Reading [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - List of Peter Thiel’s Online Writings [5%]
- Meta or Community Building - Samuel Knoche - Post (EA/LW/AF) - The Best Educational Institution in the World [1%] (12)
- Meta or Community Building - Samuel Knoche - Post (EA/LW/AF) - The Case for Education [1%] (23)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
2020 Q2
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (97; 44)
- Global Health and Development - Derek Foster - Other writing - Project - Some of the content of https://1daysooner.org/ [70%]
- Global Health and Development - Derek Foster - Other writing - Project - Various unpublished documents for the Happier Lives Institute [80%]
- Global Health and Development - Derek Foster - Other writing - Report - Some confidential COVID-19-related policy reports [70%]
- Global Health and Development - Derek Foster - Paper (Preprint) - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
- Global Health and Development - Derek Foster - Post (EA/LW/AF) - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- Global Health and Development - Derek Foster - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Global Health and Development - Derek Foster - Project - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- Global Health and Development - Kris Gulati - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- Meta or Community Building - Denisa Pop - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Meta or Community Building - Luminita Bogatean - Placement - Job - Becoming Operations Manager at CEEALAR
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Students are Employees [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - My Quarantine Reading List [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - The End of Education [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - How Steven Pinker Could Become Really Rich [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - The Public Good of Education [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Trillion Dollar Bills on the Sidewalk [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - On Schools and Churches [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Questions to Guide Life and Learning [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Online Standardized Tests Are a Bad Idea [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - Harari on Religions and Ideologies [1%]
- Meta or Community Building - Samuel Knoche - Other writing - Blog - The Single Best Policy to Combat Climate Change [1%]
- Meta or Community Building - Samuel Knoche - Post (EA/LW/AF) - Patrick Collison on Effective Altruism [1%] (97)
- X-Risks - Aron Mill - Other writing - Project - Helped launch the Food Systems Handbook (announcement) (30)
- X-Risks - Aron Mill - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
2020 Q1
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Wireheading and discontinuity [25%] (21; 11)
- Animal Welfare - Rhys Southan - Other writing - Talk - Accepted to give a talk and present a poster at the academic session of EAG 2020 in San Francisco
- Global Health and Development - Derek Foster - Other writing - Project - A confidential evaluation of an anxiety app [90%]
- Global Health and Development - Kris Gulati - Course - Distinctions in M140 (Statistics), The Open University
- Global Health and Development - Kris Gulati - Course - Distinctions in MST125 (Mathematics), The Open University
- Meta or Community Building - Samuel Knoche - Other writing - Blog - What I Learned Dropping Out of High School [1%]
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - State Space of X-Risk Trajectories [95%] (24)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - The ‘far future’ is not just the far future [99%] (30)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Michael Aird - Post (EA/LW/AF) - Four components of strategy research [50%] (26)
- X-Risks - Michael Aird - Post (EA/LW/AF) - Using vector fields to visualise preferences and make them consistent [90%] (41; 8)
- X-Risks - Michael Aird - Post (EA/LW/AF) - Value uncertainty [95%] (19)
2019 Q4
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - What are we assuming about utility functions? [1%] (17; 9)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - 8 AIS ideas [1%]
- AI Alignment - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - Luminita Bogatean - Course - Python Programming: A Concise Introduction [20%]
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Thinking of tool AIs [0%] (6)
- AI Alignment - [multiple people] - Event - AI Safety - AI Safety Learning By Doing Workshop (October 2019)
- AI Alignment - [multiple people] - Event - X-risk - AI Strategy and X-Risk Unconference (AIXSU)
- AI Alignment - Rafe Kennedy - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- AI Alignment - Samuel Knoche - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - Samuel Knoche - Code - NLP implementations [5%]
- AI Alignment - Samuel Knoche - Code - Code for lightweight Python deep learning library [5%]
- Animal Welfare - Rhys Southan - Paper (Submitted) - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
- Animal Welfare - Rhys Southan - Placement - Job - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Global Health and Development - Anders Huitfeldt - Paper (Published) - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- Global Health and Development - Anders Huitfeldt - Post (EA/LW/AF) - Post on EA Forum: Effect heterogeneity and external validity (6)
- Global Health and Development - Anders Huitfeldt - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- Global Health and Development - Kris Gulati - Course - Completed MA100 (Mathematical Methods) [auditing module], London School of Economics
- Global Health and Development - Kris Gulati - Course - Completed GV100 (Intro to Political Theory) [auditing module], London School of Economics
- Global Health and Development - Kris Gulati - Course - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- Meta or Community Building - Samuel Knoche - Code - Lottery Ticket Hypothesis [5%]
- X-Risks - David Kristoffersson - Event - Organisation - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- X-Risks - Markus Salmela - Paper (Published) - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
2019 Q3
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Cognitive Dissonance and Veg*nism (7)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - The Moral Circle is not a Circle (38)
- AI Alignment - Davide Zagami - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- AI Alignment - Linda Linsefors - Event - Organisation - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- AI Alignment - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - Linda Linsefors - Statement - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
- AI Alignment - [multiple people] - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - [multiple people] - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- Animal Welfare - Magnus Vinding - Idea - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
- Animal Welfare - Magnus Vinding - Paper (Revising) - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - My recommendations for gratitude exercises [50%] (40)
- Animal Welfare - Nix Goldowsky-Dill - Other writing - Comment - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
- Animal Welfare - Rhys Southan - Other writing - Essay - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- Animal Welfare - Rhys Southan - Placement - Job - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
- Global Health and Development - Derek Foster - Post (EA/LW/AF) - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Global Health and Development - Kris Gulati - Course - Distinction in MU123 (Mathematics), The Open University
- Global Health and Development - Kris Gulati - Course - Distinctions in MST124 (Mathematics), The Open University
- X-Risks - David Kristoffersson - Project - Applied for 501(c)(3) non-profit status for Convergence [non-profit status approved in 2019] [95%]
- X-Risks - Justin Shovelain - Project - Got non-profit status for Convergence Analysis and established it legally [90%]
2019 Q2
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - My recommendations for RSI treatment [25%] (77)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (34)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (83)
- Meta or Community Building - Denisa Pop - Other writing - Talk - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- Meta or Community Building - Denisa Pop - Project - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- Meta or Community Building - Matt Goldenberg - Event - Organisation - Organizer and instructor for the Athena Rationality Workshop (June 2019)
- Meta or Community Building - [multiple people] - Event - Rationality Workshop - Athena Rationality Workshop (June 2019) (retrospective) (24)
2019 Q1
- AI Alignment - Anonymous 1 - Course - MITx Probability
- AI Alignment - Anonymous 1 - Course - Model Thinking
- AI Alignment - Anonymous 1 - Course - Probabilistic Graphical Models
- AI Alignment - Chris Leong - Post (EA/LW/AF) - Deconfusing Logical Counterfactuals [75%] (27; 6)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - Debate AI and the Decision to Release an AI [90%] (9)
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
- AI Alignment - John Maxwell - Course - MITx Probability
- AI Alignment - John Maxwell - Course - Improving Your Statistical Inferences (21 hours)
- AI Alignment - John Maxwell - Course - Statistical Learning
- AI Alignment - Linda Linsefors - Post (EA/LW/AF) - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.”) (25; 6)
- AI Alignment - Linda Linsefors - Post (EA/LW/AF) - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and therefore would not have happened without the hotel.”) [0%] (11; 7)
- AI Alignment - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- AI Alignment - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- AI Alignment - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (30)
- Animal Welfare - Saulius Šimčikas - Post (EA/LW/AF) - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (115)
- Animal Welfare - Saulius Šimčikas - Post (EA/LW/AF) - Rodents farmed for pet snake food [99%] (75)
- Global Health and Development - Derek Foster - Other writing - Book Chapter - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
- Meta or Community Building - Denisa Pop - Event - Organisation - Helped organise the EA Values-to-Actions Retreat [33%]
- Meta or Community Building - Denisa Pop - Event - Organisation - Helped organise the EA Community Health Unconference [33%]
- Meta or Community Building - Matt Goldenberg - Code - The entirety of Project Metis [5%]
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - What Vibing Feels Like [5%] (19)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (112)
- Meta or Community Building - Matt Goldenberg - Post (EA/LW/AF) - The 3 Books Technique for Learning a New Skill [5%] (193)
- Meta or Community Building - [multiple people] - Event - Retreat - EA Glasgow (March 2019)
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - EA is vetting-constrained [10%] (129)
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - Task Y: representing EA in your field [90%] (11)
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - What makes a good culture? [90%] (29)
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - The Home Base of EA [90%]
- X-Risks - David Kristoffersson - Other writing - Talk - Designed Convergence presentation (slides, notes) and presented it at the Future of Humanity Institute [80%]
- X-Risks - David Kristoffersson - Project - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- X-Risks - David Kristoffersson - Project - Built new website for Convergence [90%]
- X-Risks - Markus Salmela - Paper (Published) - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
2018 Q4
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Believing others’ priors (8)
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - AI development incentive gradients are not uniformly terrible (21)
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - On Abstract Systems [50%] (14)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - Summary: Surreal Decisions [50%] (29)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - An Extensive Categorisation of Infinite Paradoxes [80%] (-14)
- AI Alignment - John Maxwell - Course - ARIMA Modeling with R
- AI Alignment - John Maxwell - Course - Introduction to Recommender Systems (20-48 hours)
- AI Alignment - John Maxwell - Course - Formal Software Verification
- AI Alignment - John Maxwell - Course - Text Mining and Analytics
- Animal Welfare - Frederik Bechtold - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
- Animal Welfare - Max Carpendale - Post (EA/LW/AF) - Why I’m focusing on invertebrate sentience [75%] (56)
- Global Health and Development - Derek Foster - Placement - Job - Hired as a research analyst for Rethink Priorities [95%]
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - The housekeeper [10%] (23)
- Meta or Community Building - Toon Alfrink - Post (EA/LW/AF) - We can all be high status [10%] (62)
- X-Risks - David Kristoffersson - Project - Incorporated Convergence [95%]
2018 Q3
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Annihilating aliens & Rare Earth suggest early filter (9)
- AI Alignment - John Maxwell - Course - Introduction to Time Series Analysis
- AI Alignment - John Maxwell - Course - Regression Models
- Animal Welfare - Magnus Vinding - Other writing - Essay - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
- Animal Welfare - Max Carpendale - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
- Meta or Community Building - [multiple people] - Event - Retreat - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
Key:
- Cause Area - Quarter - Output Type - Title with link [C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
Aaron Maiwald
- AI Alignment - 2022 Q2 - Course - Mathematics courses taken as part of his bachelor’s degree in Cognitive Science
- AI Alignment - 2022 Q2 - Course - ML courses taken as part of his bachelor’s degree in Cognitive Science
- Meta or Community Building - 2021 Q4 - Podcast - 70% of the work for the Gutes Einfach Tun Podcast Episodes 3, 4, 5 and 6
- Meta or Community Building - 2021 Q3 - Podcast - Gutes Einfach Tun Podcast launch & Episode 1
- Meta or Community Building - 2021 Q3 - Podcast - Gutes Einfach Tun Podcast launch & Episode 2
Anders Huitfeldt
- Global Health and Development - 2019 Q4 - Paper (Published) - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- Global Health and Development - 2019 Q4 - Post (EA/LW/AF) - Post on EA Forum: Effect heterogeneity and external validity (6)
- Global Health and Development - 2019 Q4 - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
Anonymous
- Meta or Community Building - 2021 Q2 - Placement - Job - Obtained a job at the EA org Momentum
Anonymous 1
- AI Alignment - 2019 Q1 - Course - MITx Probability
- AI Alignment - 2019 Q1 - Course - Model Thinking
- AI Alignment - 2019 Q1 - Course - Probabilistic Graphical Models
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - Believing others’ priors (8)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - AI development incentive gradients are not uniformly terrible (21)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- AI Alignment - 2018 Q3 - Post (EA/LW/AF) - Annihilating aliens & Rare Earth suggest early filter (9)
Anonymous 2
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - What are we assuming about utility functions? [1%] (17; 9)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - 8 AIS ideas [1%]
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - Cognitive Dissonance and Veg*nism (7)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - The Moral Circle is not a Circle (38)
Aron Mill
- X-Risks - 2020 Q2 - Other writing - Project - Helped launch the Food Systems Handbook (announcement) (30)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
Can Rager
- AI Alignment - 2023 Q2 - Post (EA/LW/AF) - Wrote a post on Understanding mesa-optimisation using toy models (37)
- AI Alignment - 2023 Q1 - Code - Prepared for and completed the coding test for the ARENA programme
- AI Alignment - 2023 Q1 - Other writing - Report - Developed a research proposal
- Meta or Community Building - 2023 Q1 - Event - AI Safety - Conducted online training in software engineering and mechanistic interpretability
CEEALAR
- Meta or Community Building - 2022 Q4 - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (building maintenance)
- Meta or Community Building - 2020 Q4 - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
Charlie Steiner
- AI Alignment - 2021 Q4 - Post (EA/LW/AF) - Reducing Goodhart sequence (115; 71)
Chris Leong
- AI Alignment - 2023 Q2 - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- Meta or Community Building - 2023 Q1 - Project - Worked together with Abram Demski on an Adversarial Collaboration project
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - Deconfusing Logical Counterfactuals [75%] (27; 6)
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - Debate AI and the Decision to Release an AI [90%] (9)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - On Abstract Systems [50%] (14)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - Summary: Surreal Decisions [50%] (29)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - An Extensive Categorisation of Infinite Paradoxes [80%] (-14)
Daniela Tiznado
- Meta or Community Building - 2023 Q2 - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
Davide Zagami
- AI Alignment - 2019 Q3 - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
David King
- AI Alignment - 2022 Q1 - Course - AGISF
David Kristoffersson
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - State Space of X-Risk Trajectories [95%] (24)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - The ‘far future’ is not just the far future [99%] (30)
- X-Risks - 2019 Q4 - Event - Organisation - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- X-Risks - 2019 Q3 - Project - Applied for 501(c)(3) non-profit status for Convergence [non-profit status approved in 2019] [95%]
- X-Risks - 2019 Q1 - Other writing - Talk - Designed Convergence presentation (slides, notes) and presented it at the Future of Humanity Institute [80%]
- X-Risks - 2019 Q1 - Project - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- X-Risks - 2019 Q1 - Project - Built new website for Convergence [90%]
- X-Risks - 2018 Q4 - Project - Incorporated Convergence [95%]
David Matlosci
- AI Alignment - 2022 Q3 - Course - got selected to the second phase of SERI MATS and went to California to participate in it
Denisa Pop
- Meta or Community Building - 2022 Q1 - Project - $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- Meta or Community Building - 2021 Q2 - Course - Introduction to Group Facilitation
- Meta or Community Building - 2020 Q3 - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- Meta or Community Building - 2020 Q2 - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Meta or Community Building - 2019 Q2 - Other writing - Talk - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- Meta or Community Building - 2019 Q2 - Project - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- Meta or Community Building - 2019 Q1 - Event - Organisation - Helped organise the EA Values-to-Actions Retreat [33%]
- Meta or Community Building - 2019 Q1 - Event - Organisation - Helped organise the EA Community Health Unconference [33%]
Derek Foster
- Global Health and Development - 2020 Q3 - Paper (Preprint) - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- Global Health and Development - 2020 Q3 - Paper (Published) - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- Global Health and Development - 2020 Q2 - Other writing - Project - Some of the content of https://1daysooner.org/ [70%]
- Global Health and Development - 2020 Q2 - Other writing - Project - Various unpublished documents for the Happier Lives Institute [80%]
- Global Health and Development - 2020 Q2 - Other writing - Report - Some confidential COVID-19-related policy reports [70%]
- Global Health and Development - 2020 Q2 - Paper (Preprint) - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
- Global Health and Development - 2020 Q2 - Post (EA/LW/AF) - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- Global Health and Development - 2020 Q2 - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Global Health and Development - 2020 Q2 - Project - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- Global Health and Development - 2020 Q1 - Other writing - Project - A confidential evaluation of an anxiety app [90%]
- Global Health and Development - 2019 Q3 - Post (EA/LW/AF) - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Global Health and Development - 2019 Q1 - Other writing - Book Chapter - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
- Global Health and Development - 2018 Q4 - Placement - Job - Hired as a research analyst for Rethink Priorities [95%]
Frederik Bechtold
- Animal Welfare - 2018 Q4 - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
Guillaume Corlouer
- AI Alignment - 2023 Q1 - Course - Got selected at the PIBBSS Fellowship
Hamish Huggard
- AI Alignment - 2023 Q1 - Project - Video on AI safety
- AI Alignment - 2023 Q1 - Project - Animated the AXRP video "How do neural networks do modular addition?"
- AI Alignment - 2023 Q1 - Project - Animated the AXRP video "Vanessa Kosoy on the Monotonicity Principle"
- AI Alignment - 2023 Q1 - Project - Animated the AXRP video "What is mechanistic interpretability? Neel Nanda explains"
- Meta or Community Building - 2023 Q1 - Project - Built and launched https://aisafety.world/
- Meta or Community Building - 2023 Q1 - Project - Video to help an EANZ fundraiser for GiveDirectly
Jack Harley
- Meta or Community Building - 2021 Q2 - Project - Created Longevity wiki and expanded the team to 8 members
Jaeson Booker
- AI Alignment - 2023 Q2 - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - 2023 Q2 - Other writing - Review - Reviewed a policy report on AI safety
- X-Risks - 2023 Q2 - Other writing - Blog - Moloch, Disaster Monkeys, and Gremlins (on AI safety and X-risks)
- X-Risks - 2023 Q2 - Other writing - Blog - The Amoral God-King (on AI safety and X-risks)
- X-Risks - 2023 Q2 - Other writing - Blog - The Three Heads of the Dragon (on AI safety and X-risks)
- AI Alignment - 2023 Q1 - Other writing - Book Chapter - Completed authoring two-thirds of the AI Safety handbook [70%]
- AI Alignment - 2023 Q1 - Project - Created the AI Safety Strategy Group’s website
- Meta or Community Building - 2023 Q1 - Project - Created AI Safety Strategy Group and GitHub Project Board [80%]
- AI Alignment - 2022 Q1 - Course - AGISF
- AI Alignment - 2021 Q4 - Code - AI Policy Simulator
John Maxwell
- AI Alignment - 2019 Q1 - Course - MITx Probability
- AI Alignment - 2019 Q1 - Course - Improving Your Statistical Inferences (21 hours)
- AI Alignment - 2019 Q1 - Course - Statistical Learning
- AI Alignment - 2018 Q4 - Course - ARIMA Modeling with R
- AI Alignment - 2018 Q4 - Course - Introduction to Recommender Systems (20-48 hours)
- AI Alignment - 2018 Q4 - Course - Formal Software Verification
- AI Alignment - 2018 Q4 - Course - Text Mining and Analytics
- AI Alignment - 2018 Q3 - Course - Introduction to Time Series Analysis
- AI Alignment - 2018 Q3 - Course - Regression Models
Justin Shovelain
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2019 Q3 - Project - Got non-profit status for Convergence Analysis and established it legally [90%]
Kris Gulati
- Global Health and Development - 2020 Q3 - Course - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- Global Health and Development - 2020 Q3 - Statement - “Altogether I spent approximately 9–10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents. Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- Global Health and Development - 2020 Q2 - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- Global Health and Development - 2020 Q1 - Course - Distinctions in M140 (Statistics), The Open University
- Global Health and Development - 2020 Q1 - Course - Distinctions in MST125 (Mathematics), The Open University
- Global Health and Development - 2019 Q4 - Course - Completed MA100 (Mathematical Methods) [auditing module], London School of Economics
- Global Health and Development - 2019 Q4 - Course - Completed GV100 (Intro to Political Theory) [auditing module], London School of Economics
- Global Health and Development - 2019 Q4 - Course - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- Global Health and Development - 2019 Q3 - Course - Distinction in MU123 (Mathematics), The Open University
- Global Health and Development - 2019 Q3 - Course - Distinctions in MST124 (Mathematics), The Open University
Laura C
- Meta or Community Building - 2022 Q2 - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
Linda Linsefors
- AI Alignment - 2019 Q4 - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - 2019 Q3 - Event - Organisation - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- AI Alignment - 2019 Q3 - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - 2019 Q3 - Statement - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.”) (25; 6)
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and therefore would not have happened without the hotel.”) [0%] (11; 7)
Lucas Teixeira
- AI Alignment - 2022 Q2 - Placement - Internship - Internship at Conjecture
Luminita Bogatean
- Meta or Community Building - 2022 Q2 - Course - First of three courses in the UX Programme by CareerFoundry
- AI Alignment - 2020 Q3 - Course - Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design [20%]
- Meta or Community Building - 2020 Q2 - Placement - Job - Becoming Operations Manager at CEEALAR
- AI Alignment - 2019 Q4 - Course - Python Programming: A Concise Introduction [20%]
Magnus Vinding
- Animal Welfare - 2019 Q3 - Idea - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
- Animal Welfare - 2019 Q3 - Paper (Revising) - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- Animal Welfare - 2018 Q3 - Other writing - Essay - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
Markus Salmela
- X-Risks - 2019 Q4 - Paper (Published) - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- X-Risks - 2019 Q1 - Paper (Published) - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
Matt Goldenberg
- Meta or Community Building - 2019 Q2 - Event - Organisation - Organizer and instructor for the Athena Rationality Workshop (June 2019)
- Meta or Community Building - 2019 Q1 - Code - The entirety of Project Metis [5%]
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - What Vibing Feels Like [5%] (19)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (112)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - The 3 Books Technique for Learning a New Skill [5%] (193)
Max Carpendale
- Animal Welfare - 2019 Q3 - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Animal Welfare - 2019 Q3 - Post (EA/LW/AF) - My recommendations for gratitude exercises [50%] (40)
- Animal Welfare - 2019 Q2 - Post (EA/LW/AF) - My recommendations for RSI treatment [25%] (77)
- Animal Welfare - 2019 Q2 - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (34)
- Animal Welfare - 2019 Q2 - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare - 2019 Q2 - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (83)
- Animal Welfare - 2019 Q1 - Post (EA/LW/AF) - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- Animal Welfare - 2019 Q1 - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (30)
- Animal Welfare - 2018 Q4 - Post (EA/LW/AF) - Why I’m focusing on invertebrate sentience [75%] (56)
- Animal Welfare - 2018 Q3 - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
Michael Aird
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Four components of strategy research [50%] (26)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Using vector fields to visualise preferences and make them consistent [90%] (41; 8)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Value uncertainty [95%] (19)
Michele Campolo
- AI Alignment - 2023 Q2 - Course - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - 2023 Q1 - Post (EA/LW/AF) - On value in humans, other animals, and AI (7; 3)
- AI Alignment - 2022 Q2 - Post (EA/LW/AF) - Some alternative AI safety research projects [10%] (8; 2)
- AI Alignment - 2022 Q1 - Course - Attended online course on moral psychology by the University of Warwick.
- AI Alignment - 2021 Q4 - Post (EA/LW/AF) - From language to ethics by automated reasoning (8; 2)
- AI Alignment - 2021 Q3 - Event (attendance) - Attended GoodAI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- AI Alignment - 2021 Q2 - Post (EA/LW/AF) - Naturalism and AI alignment (17; 5)
- AI Alignment - 2021 Q1 - Post (EA/LW/AF) - Contribution to Literature Review on Goal-Directedness [20%] (71; 32)
- AI Alignment - 2020 Q3 - Post (EA/LW/AF) - Decision Theory is multifaceted [25%] (9; 7)
- AI Alignment - 2020 Q3 - Post (EA/LW/AF) - Goals and short descriptions [25%] (14; 7)
- AI Alignment - 2020 Q3 - Post (EA/LW/AF) - Postponing research can sometimes be the optimal decision [25%] (29)
- AI Alignment - 2020 Q2 - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (97; 44)
- AI Alignment - 2020 Q1 - Post (EA/LW/AF) - Wireheading and discontinuity [25%] (21; 11)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - Thinking of tool AIs [0%] (6)
[multiple people]
- Meta or Community Building - 2023 Q2 - Event - Retreat - Hosted ALLFED team retreat
- AI Alignment - 2019 Q4 - Event - AI Safety - AI Safety Learning By Doing Workshop (October 2019)
- AI Alignment - 2019 Q4 - Event - X-risk - AI Strategy and X-Risk Unconference (AIXSU)
- AI Alignment - 2019 Q3 - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - 2019 Q3 - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- Meta or Community Building - 2019 Q2 - Event - Rationality Workshop - Athena Rationality Workshop (June 2019) (retrospective) (24)
- Meta or Community Building - 2019 Q1 - Event - Retreat - EA Glasgow (March 2019)
- Meta or Community Building - 2018 Q3 - Event - Retreat - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
Nia Gardner
- AI Alignment - 2023 Q2 - Event - Organisation - Organised an ML Bootcamp in Germany
Nick Stares
- Global Health and Development - 2021 Q3 - Course - CORE The Economy
Nix Goldowsky-Dill
- Animal Welfare - 2019 Q3 - Other writing - Comment - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
Onicah Ntswejakgosi
- Meta or Community Building - 2023 Q2 - Project - 3 EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
Peter Barnett
- AI Alignment - 2022 Q1 - Placement - Internship - Accepted for an internship at CHAI
- AI Alignment - 2022 Q1 - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- AI Alignment - 2022 Q1 - Post (EA/LW/AF) - Thoughts on Dangerous Learned Optimization (4)
- AI Alignment - 2022 Q1 - Post (EA/LW/AF) - Alignment Problems All the Way Down (26)
Quinn Dougherty
- AI Alignment - 2021 Q2 - Event (attendance) - AI Safety Camp
- AI Alignment - 2021 Q2 - Placement - Internship - Obtained internship at SERI
- AI Alignment - 2021 Q2 - Podcast - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- AI Alignment - 2021 Q2 - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (28)
- Meta or Community Building - 2021 Q2 - Event - Organisation - Organised an event for AI Philadelphia, inviting anti-aging expert Jack Harley
- Meta or Community Building - 2021 Q2 - Post (EA/LW/AF) - Cliffnotes to Craft of Research parts I, II, and III (7)
- Meta or Community Building - 2021 Q2 - Post (EA/LW/AF) - What am I fighting for? (12)
- X-Risks - 2021 Q2 - Post (EA/LW/AF) - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (10)
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast Episode 1
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast Episode 2
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast Episode 3
Rafe Kennedy
- AI Alignment - 2019 Q4 - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
RAISE
- AI Alignment - 2019 Q1 - Course Module produced - Nearly the entirety of this online course was created by grantees
Ramika Prajapati
- Animal Welfare - 2023 Q2 - Project - Signed a contract to author a Scientific Policy Report for Rethink Priorities' Insect Welfare Institute, to be presented to the UK Government in 2023 [20%]
- Meta or Community Building - 2023 Q2 - Project - Three EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
Rhys Southan
- Animal Welfare - 2020 Q3 - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Animal Welfare - 2020 Q1 - Other writing - Talk - Accepted to give a talk and a poster to the academic session of EAG 2020 in San Francisco
- Animal Welfare - 2019 Q4 - Paper (Submitted) - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
- Animal Welfare - 2019 Q4 - Placement - Job - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Animal Welfare - 2019 Q3 - Other writing - Essay - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- Animal Welfare - 2019 Q3 - Placement - Job - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
Samuel Knoche
- AI Alignment - 2022 Q2 - Placement - Job - Becoming a Contractor for OpenAI
- AI Alignment - 2021 Q2 - Code - Creating code [1%]
- Meta or Community Building - 2020 Q3 - Other writing - Blog - Blogs I’ve Been Reading [1%]
- Meta or Community Building - 2020 Q3 - Other writing - Blog - Books I’ve Been Reading [1%]
- Meta or Community Building - 2020 Q3 - Other writing - Blog - List of Peter Thiel’s Online Writings [5%]
- Meta or Community Building - 2020 Q3 - Post (EA/LW/AF) - The Best Educational Institution in the World [1%] (12)
- Meta or Community Building - 2020 Q3 - Post (EA/LW/AF) - The Case for Education [1%] (23)
- Meta or Community Building - 2020 Q2 - Other writing - Blog - Students are Employees [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - My Quarantine Reading List [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - The End of Education [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - How Steven Pinker Could Become Really Rich [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - The Public Good of Education [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - Trillion Dollar Bills on the Sidewalk [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - On Schools and Churches [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - Questions to Guide Life and Learning [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - Online Standardized Tests Are a Bad Idea [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - Harari on Religions and Ideologies [1%]
- Meta or Community Building - 2020 Q2 - Other writing - Blog - The Single Best Policy to Combat Climate Change [1%]
- Meta or Community Building - 2020 Q2 - Post (EA/LW/AF) - Patrick Collison on Effective Altruism [1%] (97)
- Meta or Community Building - 2020 Q1 - Other writing - Blog - What I Learned Dropping Out of High School [1%]
- AI Alignment - 2019 Q4 - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - 2019 Q4 - Code - NLP implementations [5%]
- AI Alignment - 2019 Q4 - Code - Code for lightweight Python deep learning library [5%]
- Meta or Community Building - 2019 Q4 - Code - Lottery Ticket Hypothesis [5%]
Saulius Šimčikas
- Animal Welfare - 2019 Q1 - Post (EA/LW/AF) - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (115)
- Animal Welfare - 2019 Q1 - Post (EA/LW/AF) - Rodents farmed for pet snake food [99%] (75)
Severin Seehrich
- Meta or Community Building - 2022 Q1 - Event - Organisation - Organised Summer Solstice Celebration event for EAs
- Meta or Community Building - 2022 Q1 - Post (EA/LW/AF) - The Berlin Hub: Longtermist co-living space (plan) (84)
- Meta or Community Building - 2022 Q1 - Project - Designed The Berlin Longtermist Hub
Simmo Simpson
- Meta or Community Building - 2022 Q1 - Placement - Job - Job placement as Executive Assistant to COO of Alvea.
- Meta or Community Building - 2021 Q4 - Course - Became an Association for Coaching accredited coach
Theo Knopfer
- X-Risks - 2022 Q2 - Course - Accepted into CHERI Summer Fellowship
- X-Risks - 2022 Q2 - Event - Organisation - A grant from CEA to organise retreats on x-risk in France
- AI Alignment - 2022 Q1 - Course - AGISF
- X-Risks - 2022 Q1 - Course - Policy Careers Europe by Training for Good (https://www.trainingforgood.com/policy-careers-europe)
- X-Risks - 2022 Q1 - Course - Weapons of Mass Destruction by IEGA
Toon Alfrink
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - EA is vetting-constrained [10%] (129)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - Task Y: representing EA in your field [90%] (11)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - What makes a good culture? [90%] (29)
- Meta or Community Building - 2019 Q1 - Post (EA/LW/AF) - The Home Base of EA [90%]
- Meta or Community Building - 2018 Q4 - Post (EA/LW/AF) - The housekeeper [10%] (23)
- Meta or Community Building - 2018 Q4 - Post (EA/LW/AF) - We can all be high status [10%] (62)
Vinay Hiremath
- AI Alignment - 2023 Q1 - Placement - Job - Selected as a Teaching Assistant for the ML for Alignment Bootcamp
- AI Alignment - 2022 Q2 - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning for Alignment Bootcamp)
- AI Alignment - 2022 Q1 - Course - AGISF
- Meta or Community Building - 2022 Q1 - Project - Built website for SERI conference
Key:
- Cause Area - Quarter - Name of Grantee - Title with link [C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
Code
- AI Alignment - 2023 Q1 - Can Rager - Prepared for and took the coding test for the ARENA programme
- AI Alignment - 2021 Q4 - Jaeson Booker - AI Policy Simulator
- AI Alignment - 2021 Q2 - Samuel Knoche - Creating code [1%]
- AI Alignment - 2019 Q4 - Samuel Knoche - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - 2019 Q4 - Samuel Knoche - NLP implementations [5%]
- AI Alignment - 2019 Q4 - Samuel Knoche - Code for lightweight Python deep learning library [5%]
- Meta or Community Building - 2019 Q4 - Samuel Knoche - Lottery Ticket Hypothesis [5%]
- Meta or Community Building - 2019 Q1 - Matt Goldenberg - The entirety of Project Metis [5%]
Course
- AI Alignment - 2023 Q2 - Chris Leong - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - 2023 Q2 - Jaeson Booker - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - 2023 Q2 - Michele Campolo - Got accepted to the online training + research sprint component of John Wentworth's SERI MATS stream
- AI Alignment - 2023 Q1 - Guillaume Corlouer - Got selected for the PIBBSS Fellowship
- AI Alignment - 2022 Q3 - David Matlosci - Got selected for the second phase of SERI MATS and went to California to participate in it
- AI Alignment - 2022 Q2 - Aaron Maiwald - Courses on mathematics as part of his bachelor's degree in Cognitive Science
- AI Alignment - 2022 Q2 - Aaron Maiwald - Courses on ML as part of his bachelor's degree in Cognitive Science
- Meta or Community Building - 2022 Q2 - Luminita Bogatean - First of three courses in the UX Programme by CareerFoundry
- X-Risks - 2022 Q2 - Theo Knopfer - Accepted into CHERI Summer Fellowship
- AI Alignment - 2022 Q1 - David King - AGISF
- AI Alignment - 2022 Q1 - Jaeson Booker - AGISF
- AI Alignment - 2022 Q1 - Michele Campolo - Attended online course on moral psychology by the University of Warwick.
- AI Alignment - 2022 Q1 - Theo Knopfer - AGISF
- AI Alignment - 2022 Q1 - Vinay Hiremath - AGISF
- X-Risks - 2022 Q1 - Theo Knopfer - Policy Careers Europe by Training for Good (https://www.trainingforgood.com/policy-careers-europe)
- X-Risks - 2022 Q1 - Theo Knopfer - Weapons of Mass Destruction by IEGA
- Meta or Community Building - 2021 Q4 - Simmo Simpson - Became an Association for Coaching accredited coach
- Global Health and Development - 2021 Q3 - Nick Stares - CORE The Economy
- Meta or Community Building - 2021 Q2 - Denisa Pop - Introduction to Group Facilitation
- AI Alignment - 2020 Q3 - Luminita Bogatean - Enrolled in The Open University’s bachelor’s degree in Computing & IT and Design [20%]
- Global Health and Development - 2020 Q3 - Kris Gulati - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- Global Health and Development - 2020 Q1 - Kris Gulati - Distinction in M140 (Statistics), The Open University
- Global Health and Development - 2020 Q1 - Kris Gulati - Distinction in MST125 (Mathematics), The Open University
- AI Alignment - 2019 Q4 - Luminita Bogatean - Python Programming: A Concise Introduction [20%]
- Global Health and Development - 2019 Q4 - Kris Gulati - Completed MA100 (Mathematical Methods)[auditing module], London School of Economics
- Global Health and Development - 2019 Q4 - Kris Gulati - Completed GV100 (Intro to Political Theory)[auditing module], London School of Economics
- Global Health and Development - 2019 Q4 - Kris Gulati - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- Global Health and Development - 2019 Q3 - Kris Gulati - Distinction in MU123 (Mathematics), The Open University
- Global Health and Development - 2019 Q3 - Kris Gulati - Distinction in MST124 (Mathematics), The Open University
- AI Alignment - 2019 Q1 - Anonymous 1 - MITx Probability
- AI Alignment - 2019 Q1 - Anonymous 1 - Model Thinking
- AI Alignment - 2019 Q1 - Anonymous 1 - Probabilistic Graphical Models
- AI Alignment - 2019 Q1 - John Maxwell - MITx Probability
- AI Alignment - 2019 Q1 - John Maxwell - Improving Your Statistical Inferences (21 hours)
- AI Alignment - 2019 Q1 - John Maxwell - Statistical Learning
- AI Alignment - 2018 Q4 - John Maxwell - ARIMA Modeling with R
- AI Alignment - 2018 Q4 - John Maxwell - Introduction to Recommender Systems (20-48 hours)
- AI Alignment - 2018 Q4 - John Maxwell - Formal Software Verification
- AI Alignment - 2018 Q4 - John Maxwell - Text Mining and Analytics
- AI Alignment - 2018 Q3 - John Maxwell - Introduction to Time Series Analysis
- AI Alignment - 2018 Q3 - John Maxwell - Regression Models
Course Module produced
- AI Alignment - 2019 Q1 - RAISE - Nearly the entirety of this online course was created by grantees
Event - AI Safety
- Meta or Community Building - 2023 Q1 - Can Rager - Conducted online training for software engineering and mechanistic interpretability
- AI Alignment - 2019 Q4 - [multiple people] - AI Safety Learning By Doing Workshop (October 2019)
- AI Alignment - 2019 Q3 - [multiple people] - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - 2019 Q3 - [multiple people] - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
Event (attendance)
- AI Alignment - 2021 Q3 - Michele Campolo - Attended Good AI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- AI Alignment - 2021 Q2 - Quinn Dougherty - AI Safety Camp
Event - Organisation
- AI Alignment - 2023 Q2 - Nia Gardner - Organised an ML Bootcamp in Germany
- X-Risks - 2022 Q2 - Theo Knopfer - A grant from CEA to organise retreats on x-risk in France
- Meta or Community Building - 2022 Q1 - Severin Seehrich - Organised Summer Solstice Celebration event for EAs
- Meta or Community Building - 2021 Q2 - Quinn Dougherty - Organised an event for AI Philadelphia, inviting anti-aging expert Jack Harley
- AI Alignment - 2019 Q4 - Linda Linsefors - Organised the AI Safety Learning By Doing Workshop (August and October 2019)
- X-Risks - 2019 Q4 - David Kristoffersson - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- AI Alignment - 2019 Q3 - Linda Linsefors - Organised the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- AI Alignment - 2019 Q3 - Linda Linsefors - Organised the AI Safety Learning By Doing Workshop (August and October 2019)
- Meta or Community Building - 2019 Q2 - Matt Goldenberg - Organiser and instructor for the Athena Rationality Workshop (June 2019)
- Meta or Community Building - 2019 Q1 - Denisa Pop - Helped organise the EA Values-to-Actions Retreat [33%]
- Meta or Community Building - 2019 Q1 - Denisa Pop - Helped organise the EA Community Health Unconference [33%]
Event - Rationality Workshop
- Meta or Community Building - 2019 Q2 - [multiple people] - Athena Rationality Workshop (June 2019) (retrospective) (24)
Event - Retreat
- Meta or Community Building - 2023 Q2 - [multiple people] - Hosted ALLFED team retreat
- Meta or Community Building - 2019 Q1 - [multiple people] - EA Glasgow (March 2019)
- Meta or Community Building - 2018 Q3 - [multiple people] - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
Event - X-risk
- AI Alignment - 2019 Q4 - [multiple people] - AI Strategy and X-Risk Unconference (AIXSU)
Idea
- Animal Welfare - 2019 Q3 - Magnus Vinding - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
Other writing - Blog
- X-Risks - 2023 Q2 - Jaeson Booker - Moloch, Disaster Monkeys, and Gremlins (blog post on AI safety and x-risks)
- X-Risks - 2023 Q2 - Jaeson Booker - The Amoral God-King (blog post on AI safety and x-risks)
- X-Risks - 2023 Q2 - Jaeson Booker - The Three Heads of the Dragon (blog post on AI safety and x-risks)
- Meta or Community Building - 2020 Q3 - Samuel Knoche - Blogs I’ve Been Reading [1%]
- Meta or Community Building - 2020 Q3 - Samuel Knoche - Books I’ve Been Reading [1%]
- Meta or Community Building - 2020 Q3 - Samuel Knoche - List of Peter Thiel’s Online Writings [5%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - Students are Employees [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - My Quarantine Reading List [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - The End of Education [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - How Steven Pinker Could Become Really Rich [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - The Public Good of Education [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - Trillion Dollar Bills on the Sidewalk [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - On Schools and Churches [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - Questions to Guide Life and Learning [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - Online Standardized Tests Are a Bad Idea [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - Harari on Religions and Ideologies [1%]
- Meta or Community Building - 2020 Q2 - Samuel Knoche - The Single Best Policy to Combat Climate Change [1%]
- Meta or Community Building - 2020 Q1 - Samuel Knoche - What I Learned Dropping Out of High School [1%]
Other writing - Book Chapter
- AI Alignment - 2023 Q1 - Jaeson Booker - Completed authoring two-thirds of the AI Safety handbook [70%]
- Global Health and Development - 2019 Q1 - Derek Foster - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
Other writing - Comment
- Animal Welfare - 2019 Q3 - Nix Goldowsky-Dill - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
Other writing - Essay
- Animal Welfare - 2019 Q3 - Rhys Southan - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- Animal Welfare - 2018 Q3 - Magnus Vinding - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
Other writing - Project
- Global Health and Development - 2020 Q2 - Derek Foster - Some of the content of https://1daysooner.org/ [70%]
- Global Health and Development - 2020 Q2 - Derek Foster - Various unpublished documents for the Happier Lives Institute [80%]
- X-Risks - 2020 Q2 - Aron Mill - Helped launch the Food Systems Handbook (announcement) (30)
- Global Health and Development - 2020 Q1 - Derek Foster - A confidential evaluation of an anxiety app [90%]
Other writing - Report
- AI Alignment - 2023 Q1 - Can Rager - Developed a research proposal
- Global Health and Development - 2020 Q2 - Derek Foster - Some confidential COVID-19-related policy reports [70%]
Other writing - Review
- AI Alignment - 2023 Q2 - Jaeson Booker - Reviewed a policy report on AI safety
Other writing - Talk
- Animal Welfare - 2020 Q1 - Rhys Southan - Accepted to give a talk and a poster to the academic session of EAG 2020 in San Francisco
- Meta or Community Building - 2019 Q2 - Denisa Pop - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- X-Risks - 2019 Q1 - David Kristoffersson - Designed Convergence presentation (slides, notes) and held it at the Future of Humanity Institute [80%]
Paper (Preprint)
- Global Health and Development - 2020 Q3 - Derek Foster - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- Global Health and Development - 2020 Q2 - Derek Foster - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
Paper (Published)
- Global Health and Development - 2020 Q3 - Derek Foster - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- Global Health and Development - 2019 Q4 - Anders Huitfeldt - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- AI Alignment - 2019 Q3 - Davide Zagami - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- X-Risks - 2019 Q1 - Markus Salmela - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
Paper (Revising)
- Animal Welfare - 2019 Q3 - Magnus Vinding - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
Paper (Submitted)
- Animal Welfare - 2019 Q4 - Rhys Southan - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
Placement - Internship
- AI Alignment - 2022 Q2 - Lucas Teixeira - Internship at Conjecture
- AI Alignment - 2022 Q1 - Peter Barnett - Accepted for an internship at CHAI
- AI Alignment - 2022 Q1 - Peter Barnett - Accepted into SERI ML Alignment Theory Scholars program
- AI Alignment - 2021 Q2 - Quinn Dougherty - Obtained internship at SERI
- Meta or Community Building - 2020 Q3 - Denisa Pop - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- Meta or Community Building - 2020 Q2 - Denisa Pop - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Animal Welfare - 2018 Q4 - Frederik Bechtold - Received an (unpaid) internship at Animal Ethics [1%]
Placement - Job
- AI Alignment - 2023 Q1 - Vinay Hiremath - Selected as a Teaching Assistant for the ML for Alignment Bootcamp
- AI Alignment - 2022 Q2 - Samuel Knoche - Becoming a Contractor for OpenAI
- AI Alignment - 2022 Q2 - Vinay Hiremath - Job placement as Teaching Assistant for MLAB (Machine Learning for Alignment Bootcamp)
- Meta or Community Building - 2022 Q2 - Laura C - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
- Meta or Community Building - 2022 Q1 - Simmo Simpson - Job placement as Executive Assistant to COO of Alvea.
- Meta or Community Building - 2021 Q2 - Anonymous - Obtained job for the EA org Momentum
- Meta or Community Building - 2020 Q2 - Luminita Bogatean - Becoming Operations Manager at CEEALAR
- AI Alignment - 2019 Q4 - Rafe Kennedy - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- Animal Welfare - 2019 Q4 - Rhys Southan - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Animal Welfare - 2019 Q3 - Rhys Southan - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
- Global Health and Development - 2018 Q4 - Derek Foster - Hired as a research analyst for Rethink Priorities [95%]
- Animal Welfare - 2018 Q3 - Max Carpendale - Got a research position (part-time) at Animal Ethics [25%]
Placement - PhD
- Animal Welfare - 2020 Q3 - Rhys Southan - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Global Health and Development - 2020 Q2 - Kris Gulati - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
Podcast
- Meta or Community Building - 2021 Q4 - Aaron Maiwald - 70% of the work for the Gutes Einfach Tun Podcast episodes 3, 4, 5 and 6
- Meta or Community Building - 2021 Q3 - Aaron Maiwald - Gutes Einfach Tun Podcast launch & Episode 1
- Meta or Community Building - 2021 Q3 - Aaron Maiwald - Gutes Einfach Tun Podcast launch & Episode 2
- AI Alignment - 2021 Q2 - Quinn Dougherty - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast Episode 1
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast Episode 2
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast Episode 3
Post (EA/LW/AF)
- AI Alignment - 2023 Q2 - Can Rager - Understanding mesa-optimisation using toy models (37)
- AI Alignment - 2023 Q1 - Michele Campolo - On value in humans, other animals, and AI (7; 3)
- AI Alignment - 2022 Q2 - Michele Campolo - Some alternative AI safety research projects [10%] (8; 2)
- AI Alignment - 2022 Q1 - Peter Barnett - Thoughts on Dangerous Learned Optimization (4)
- AI Alignment - 2022 Q1 - Peter Barnett - Alignment Problems All the Way Down (26)
- Meta or Community Building - 2022 Q1 - Severin Seehrich - The Berlin Hub: Longtermist co-living space (plan) (84)
- AI Alignment - 2021 Q4 - Charlie Steiner - Reducing Goodhart sequence (115; 71)
- AI Alignment - 2021 Q4 - Michele Campolo - From language to ethics by automated reasoning (8; 2)
- AI Alignment - 2021 Q2 - Michele Campolo - Naturalism and AI alignment (17; 5)
- AI Alignment - 2021 Q2 - Quinn Dougherty - High Impact Careers in Formal Verification: Artificial Intelligence (28)
- Meta or Community Building - 2021 Q2 - Quinn Dougherty - Cliffnotes to Craft of Research parts I, II, and III (7)
- Meta or Community Building - 2021 Q2 - Quinn Dougherty - What am I fighting for? (12)
- X-Risks - 2021 Q2 - Quinn Dougherty - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (10)
- AI Alignment - 2021 Q1 - Michele Campolo - Contribution to Literature Review on Goal-Directedness [20%] (71; 32)
- AI Alignment - 2020 Q3 - Michele Campolo - Decision Theory is multifaceted [25%] (9; 7)
- AI Alignment - 2020 Q3 - Michele Campolo - Goals and short descriptions [25%] (14; 7)
- AI Alignment - 2020 Q3 - Michele Campolo - Postponing research can sometimes be the optimal decision [25%] (29)
- Meta or Community Building - 2020 Q3 - Samuel Knoche - The Best Educational Institution in the World [1%] (12)
- Meta or Community Building - 2020 Q3 - Samuel Knoche - The Case for Education [1%] (23)
- X-Risks - 2020 Q3 - David Kristoffersson - Collaborated on 14 other Convergence publications (3 of these this quarter) [97%] (10)
- X-Risks - 2020 Q3 - Justin Shovelain - Published or provided the primary ideas for, and directed the publishing of, 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- AI Alignment - 2020 Q2 - Michele Campolo - Contributions to the sequence Thoughts on Goal-Directedness [25%] (97; 44)
- Global Health and Development - 2020 Q2 - Derek Foster - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- Global Health and Development - 2020 Q2 - Derek Foster - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Meta or Community Building - 2020 Q2 - Samuel Knoche - Patrick Collison on Effective Altruism [1%] (97)
- X-Risks - 2020 Q2 - Aron Mill - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
- X-Risks - 2020 Q2 - David Kristoffersson - Collaborated on 14 other Convergence publications (5 of these this quarter) [97%] (10)
- X-Risks - 2020 Q2 - Justin Shovelain - Published or provided the primary ideas for, and directed the publishing of, 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- AI Alignment - 2020 Q1 - Michele Campolo - Wireheading and discontinuity [25%] (21; 11)
- X-Risks - 2020 Q1 - David Kristoffersson - State Space of X-Risk Trajectories [95%] (24)
- X-Risks - 2020 Q1 - David Kristoffersson - Collaborated on 14 other Convergence publications (6 of these this quarter) [97%] (10)
- X-Risks - 2020 Q1 - David Kristoffersson - The ‘far future’ is not just the far future [99%] (30)
- X-Risks - 2020 Q1 - Justin Shovelain - Published or provided the primary ideas for, and directed the publishing of, 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Michael Aird - Four components of strategy research [50%] (26)
- X-Risks - 2020 Q1 - Michael Aird - Using vector fields to visualise preferences and make them consistent [90%] (41; 8)
- X-Risks - 2020 Q1 - Michael Aird - Value uncertainty [95%] (19)
- AI Alignment - 2019 Q4 - Anonymous 2 - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- AI Alignment - 2019 Q4 - Anonymous 2 - What are we assuming about utility functions? [1%] (17; 9)
- AI Alignment - 2019 Q4 - Anonymous 2 - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - 2019 Q4 - Anonymous 2 - 8 AIS ideas [1%]
- AI Alignment - 2019 Q4 - Michele Campolo - Thinking of tool AIs [0%] (6)
- Global Health and Development - 2019 Q4 - Anders Huitfeldt - Post on EA Forum: Effect heterogeneity and external validity (6)
- Global Health and Development - 2019 Q4 - Anders Huitfeldt - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- AI Alignment - 2019 Q3 - Anonymous 2 - Cognitive Dissonance and Veg*nism (7)
- AI Alignment - 2019 Q3 - Anonymous 2 - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- AI Alignment - 2019 Q3 - Anonymous 2 - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- AI Alignment - 2019 Q3 - Anonymous 2 - Distance Functions are Hard (31; 11)
- AI Alignment - 2019 Q3 - Anonymous 2 - The Moral Circle is not a Circle (38)
- Animal Welfare - 2019 Q3 - Max Carpendale - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Animal Welfare - 2019 Q3 - Max Carpendale - My recommendations for gratitude exercises [50%] (40)
- Global Health and Development - 2019 Q3 - Derek Foster - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Animal Welfare - 2019 Q2 - Max Carpendale - My recommendations for RSI treatment [25%] (77)
- Animal Welfare - 2019 Q2 - Max Carpendale - Thoughts on the welfare of farmed insects [50%] (34)
- Animal Welfare - 2019 Q2 - Max Carpendale - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare - 2019 Q2 - Max Carpendale - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (83)
- AI Alignment - 2019 Q1 - Chris Leong - Deconfusing Logical Counterfactuals [75%] (27; 6)
- AI Alignment - 2019 Q1 - Chris Leong - Debate AI and the Decision to Release an AI [90%] (9)
- AI Alignment - 2019 Q1 - Davide Zagami - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - 2019 Q1 - Davide Zagami - RAISE lessons on Fundamentals of Formalization [5%] (28)
- AI Alignment - 2019 Q1 - Linda Linsefors - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.“) (25; 6)
- AI Alignment - 2019 Q1 - Linda Linsefors - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and therefore would not have happened without the hotel.”) [0%] (11; 7)
- Animal Welfare - 2019 Q1 - Max Carpendale - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- Animal Welfare - 2019 Q1 - Max Carpendale - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (30)
- Animal Welfare - 2019 Q1 - Saulius Šimčikas - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (115)
- Animal Welfare - 2019 Q1 - Saulius Šimčikas - Rodents farmed for pet snake food [99%] (75)
- Meta or Community Building - 2019 Q1 - Matt Goldenberg - What Vibing Feels Like [5%] (19)
- Meta or Community Building - 2019 Q1 - Matt Goldenberg - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building - 2019 Q1 - Matt Goldenberg - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building - 2019 Q1 - Matt Goldenberg - S-Curves for Trend Forecasting [5%] (112)
- Meta or Community Building - 2019 Q1 - Matt Goldenberg - The 3 Books Technique for Learning a New Skill [5%] (193)
- Meta or Community Building - 2019 Q1 - Toon Alfrink - EA is vetting-constrained [10%] (129)
- Meta or Community Building - 2019 Q1 - Toon Alfrink - Task Y: representing EA in your field [90%] (11)
- Meta or Community Building - 2019 Q1 - Toon Alfrink - What makes a good culture? [90%] (29)
- Meta or Community Building - 2019 Q1 - Toon Alfrink - The Home Base of EA [90%]
- AI Alignment - 2018 Q4 - Anonymous 1 - Believing others’ priors (8)
- AI Alignment - 2018 Q4 - Anonymous 1 - AI development incentive gradients are not uniformly terrible (21)
- AI Alignment - 2018 Q4 - Anonymous 1 - Should donor lottery winners write reports? (29)
- AI Alignment - 2018 Q4 - Chris Leong - On Abstract Systems [50%] (14)
- AI Alignment - 2018 Q4 - Chris Leong - On Disingenuity [50%] (28)
- AI Alignment - 2018 Q4 - Chris Leong - Summary: Surreal Decisions [50%] (29)
- AI Alignment - 2018 Q4 - Chris Leong - An Extensive Categorisation of Infinite Paradoxes [80%] (-14)
- Animal Welfare - 2018 Q4 - Max Carpendale - Why I’m focusing on invertebrate sentience [75%] (56)
- Meta or Community Building - 2018 Q4 - Toon Alfrink - The housekeeper [10%] (23)
- Meta or Community Building - 2018 Q4 - Toon Alfrink - We can all be high status [10%] (62)
- AI Alignment - 2018 Q3 - Anonymous 1 - Annihilating aliens & Rare Earth suggest early filter (9)
Project
- Animal Welfare - 2023 Q2 - Ramika Prajapati - Signed a contract to author a Scientific Policy Report for Rethink Priorities' Insect Welfare Institute, to be presented to the UK Government in 2023 [20%]
- Meta or Community Building - 2023 Q2 - Daniela Tiznado - Three EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- Meta or Community Building - 2023 Q2 - Onicah Ntswejakgosi - Three EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- Meta or Community Building - 2023 Q2 - Ramika Prajapati - Three EA women collaborated at CEEALAR on building a global support network for EA Women [10%]
- AI Alignment - 2023 Q1 - Hamish Huggard - Video on AI safety
- AI Alignment - 2023 Q1 - Hamish Huggard - Animated the AXRP video "How do neural networks do modular addition?"
- AI Alignment - 2023 Q1 - Hamish Huggard - Animated the AXRP video "Vanessa Kosoy on the Monotonicity Principle"
- AI Alignment - 2023 Q1 - Hamish Huggard - Animated the AXRP video "What is mechanistic interpretability? Neel Nanda explains"
- AI Alignment - 2023 Q1 - Jaeson Booker - Created the AI Safety Strategy Group's website
- Meta or Community Building - 2023 Q1 - Chris Leong - Worked together with Abram Demski on an Adversarial Collaboration project
- Meta or Community Building - 2023 Q1 - Hamish Huggard - Built and launched https://aisafety.world/
- Meta or Community Building - 2023 Q1 - Hamish Huggard - Video to help an EANZ fundraiser for GiveDirectly
- Meta or Community Building - 2023 Q1 - Jaeson Booker - Created AI Safety Strategy Group and GitHub Project Board [80%]
- Meta or Community Building - 2022 Q1 - Denisa Pop - $9,000 grant from the Effective Altruism Infrastructure Fund to organise group activities for improving self-awareness and communication in EAs [50%]
- Meta or Community Building - 2022 Q1 - Severin Seehrich - Designed The Berlin Longtermist Hub
- Meta or Community Building - 2022 Q1 - Vinay Hiremath - Built website for SERI conference
- Meta or Community Building - 2021 Q2 - Jack Harley - Created Longevity wiki and expanded the team to 8 members
- Global Health and Development - 2020 Q2 - Derek Foster - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- X-Risks - 2019 Q4 - Markus Salmela - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- X-Risks - 2019 Q3 - David Kristoffersson - Applied for 501c3 non-profit status for Convergence [non-profit status approved in 2019] [95%]
- X-Risks - 2019 Q3 - Justin Shovelain - Got non-profit status for Convergence Analysis and established it legally [90%]
- Meta or Community Building - 2019 Q2 - Denisa Pop - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- X-Risks - 2019 Q1 - David Kristoffersson - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- X-Risks - 2019 Q1 - David Kristoffersson - Built new website for Convergence [90%]
- X-Risks - 2018 Q4 - David Kristoffersson - Incorporated Convergence [95%]
Statement
- Meta or Community Building - 2022 Q4 - CEEALAR - We had little in the way of concrete outputs this quarter due to diminished numbers (building maintenance)
- Meta or Community Building - 2020 Q4 - CEEALAR - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
- Global Health and Development - 2020 Q3 - Kris Gulati - “Altogether I spent approximately 9–10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents. Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- AI Alignment - 2019 Q3 - Linda Linsefors - “I think the biggest impact EA Hotel did for me was about self-growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
Next steps for departing grantees
- Samuel Knoche starts work as a contractor for OpenAI.
- Lucas Teixeira takes up an internship at Conjecture.
- Peter Barnett takes up an internship at the Stanford Existential Risks Initiative.
- Aron Mill enrolls in a Master's program at TU Berlin.
- Quinn Dougherty takes up an internship at the Stanford Existential Risks Initiative.
- Kris Gulati takes up an Economics MRes/PhD at the University of Glasgow.
- Rhys Southan embarks upon a PhD in Philosophy at the University of Oxford.
- Linda Linsefors starts AI Safety Support.
- Davide Zagami goes on to work at AI/computer vision startup HoraVision.
- Chris Leong goes on to work at Triplebyte.
- Ed Wise goes on to complete a Master's in International Relations (cum laude) at Leiden University.
- Rafe Kennedy starts a job at the Machine Intelligence Research Institute (MIRI).
- Hoagy Cunningham takes part in the MIRI Summer Fellows program, and goes on to pursue AI policy opportunities as a civil servant in the UK government.
- Patrick Leask begins a PhD in Interpretability at Durham University, and starts work as a Researcher at Leap Labs.