Outputs
Last updated Jul 15, 2022
As of July 2022, we have been running for 50 months. Most grantees are at a relatively early stage with their work and careers. Below we have collated a list of outputs that people have volunteered to share thus far.
This should not be interpreted as an exhaustive list of everything of value that the Centre for Enabling EA Learning & Research (CEEALAR) has produced. We have only included things for which the value can be independently verified. This list likely captures less than half of the actual value.
Total expenses
Money: So far ~£321,000* has been spent on hosting our residents, of which ~£41,000 was contributed by residents. Everything below is a result of that funding.
Time: ~18,600 person-days spent at CEEALAR.
Summary of Outputs
- 7 internships and 11 jobs earned at EA organisations; 2 PhD places earned;
- The incubation of 3 EA projects with potential for scaling (including CEEALAR);
- 36 online course modules followed;
- 2.5 online course modules produced;
- 114 posts on the EA Forum, Less Wrong and the AI Alignment Forum (with a total of ~3100 karma);
- 5 papers published, 2 preprints, 1 submission and 1 revision;
- 24 other pieces of writing, including blogs, reports and talks;
- 7 code repositories contributed to;
- 2 podcasts (with 8 episodes produced at CEEALAR);
- 5 AI Safety / X-risk events, 1 rationality workshop and 2 EA retreats organised and hosted; 4 EA / X-risk retreats organised;
- 11 projects (that don't fit into the above) worked on.
See the other tabs above for highlights, and lists of all outputs sorted by cause area, date, grantee and type (see here for a spreadsheet of all the outputs). The tab "Next steps" shows the next steps of a selection of our grantees, following their stay at CEEALAR.
*This is the total cost of the project to date, not including the purchase of the building (£132,276.95 including building survey and conveyancing).
Key:
- Cause Area - Name of Grantee - Output Type - Title with link [C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
2022 Q2
- AI Alignment - Lucas Teixeira - Placement - Internship - Internship at Conjecture
- AI Alignment - Samuel Knoche - Placement - Job - Becoming a Contractor for OpenAI
- AI Alignment - Vinay Hiremath - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning Alignment Bootcamp)
- Meta or Community Building - Laura C - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
2022 Q1
- AI Alignment - Peter Barnett - Placement - Internship - Accepted for an internship at CHAI
- AI Alignment - Peter Barnett - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- Meta or Community Building related - Simmo Simpson - Placement - Job - Job placement as Executive Assistant to the COO of Alvea
2021 Q2
- AI Alignment - Quinn Dougherty - Placement - Internship - Obtained internship at SERI
- AI Alignment - Quinn Dougherty - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (25)
2021 Q1
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast
2020 Q4
- Meta or Community Building related - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
2020 Q3
- Animal Welfare related - Rhys Southan - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Meta or Community Building related - Denisa Pop - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
2020 Q2
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (58; 30)
- Global Health and Development related - Derek Foster - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Global Health and Development related - Kris Gulati - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- Meta or Community Building related - Denisa Pop - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- X-Risks - Aron Mill - Other writing - Project - Helped launch the Food Systems Handbook (announcement) (30)
- X-Risks - Aron Mill - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
2019 Q4
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - Rafe Kennedy - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- AI Alignment - Samuel Knoche - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - Samuel Knoche - Code - NLP implementations [5%]
- AI Alignment - Samuel Knoche - Code - Code for lightweight Python deep learning library [5%]
- Global Health and Development related - Anders Huitfeldt - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- Meta or Community Building related - Samuel Knoche - Code - Lottery Ticket Hypothesis [5%]
- X-Risks - Markus Salmela - Project - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
2019 Q3
- AI Alignment - - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - The Moral Circle is not a Circle (36)
- AI Alignment - Davide Zagami - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
2019 Q2
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (33)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (82)
2019 Q1
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (99)
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - EA is vetting-constrained [10%] (125)
2018 Q4
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- Animal Welfare related - Frederik Bechtold - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
2018 Q3
- Animal Welfare related - Max Carpendale - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
Key:
- Quarter - Name of Grantee - Output Type - Title with link [C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
AI Alignment
- 2022 Q2 - Aaron Maiwald - Course - Mathematics courses as part of his bachelor's degree in Cognitive Science
- 2022 Q2 - Aaron Maiwald - Course - ML courses as part of his bachelor's degree in Cognitive Science
- 2022 Q2 - Lucas Teixeira - Placement - Internship - Internship at Conjecture
- 2022 Q2 - Michele Campolo - Post (EA/LW/AF) - Some alternative AI safety research projects (8; 2)
- 2022 Q2 - Samuel Knoche - Placement - Job - Becoming a Contractor for OpenAI
- 2022 Q2 - Vinay Hiremath - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning Alignment Bootcamp)
- 2022 Q1 - David King - Course - AGISF
- 2022 Q1 - Jaeson Booker - Course - AGISF
- 2022 Q1 - Michele Campolo - Course - Attended an online course on moral psychology by the University of Warwick.
- 2022 Q1 - Peter Barnett - Placement - Internship - Accepted for an internship at CHAI
- 2022 Q1 - Peter Barnett - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- 2022 Q1 - Peter Barnett - Post (EA/LW/AF) - Thoughts on Dangerous Learned Optimization (4)
- 2022 Q1 - Peter Barnett - Post (EA/LW/AF) - Alignment Problems All the Way Down (25)
- 2022 Q1 - Theo Knopfer - Course - AGISF
- 2022 Q1 - Vinay Hiremath - Course - AGISF
- 2021 Q4 - Charlie Steiner - Post (EA/LW/AF) - Reducing Goodhart sequence (115; 71)
- 2021 Q4 - Jaeson Booker - Code - AI Policy Simulator
- 2021 Q4 - Michele Campolo - Post (EA/LW/AF) - From language to ethics by automated reasoning (2; 2)
- 2021 Q3 - Michele Campolo - Event - Attended Good AI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- 2021 Q2 - Michele Campolo - Post (EA/LW/AF) - Naturalism and AI alignment (10; 5)
- 2021 Q2 - Quinn Dougherty - Course/event - AI Safety Camp
- 2021 Q2 - Quinn Dougherty - Placement - Internship - Obtained internship at SERI
- 2021 Q2 - Quinn Dougherty - Podcast - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (25)
- 2021 Q2 - Samuel Knoche - Code - Creating code [1%]
- 2021 Q1 - Michele Campolo - Post (EA/LW/AF) - Contribution to Literature Review on Goal-Directedness [20%] (58; 30)
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 1
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 2
- 2021 Q1 - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 3
- 2020 Q3 - Luminita Bogatean - Course - Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design [20%]
- 2020 Q3 - Michele Campolo - Post (EA/LW/AF) - Decision Theory is multifaceted [25%] (6; 4)
- 2020 Q3 - Michele Campolo - Post (EA/LW/AF) - Goals and short descriptions [25%] (14; 7)
- 2020 Q3 - Michele Campolo - Post (EA/LW/AF) - Postponing research can sometimes be the optimal decision [25%] (28)
- 2020 Q2 - Michele Campolo - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (58; 30)
- 2020 Q1 - Michele Campolo - Post (EA/LW/AF) - Wireheading and discontinuity [25%] (21; 11)
- 2019 Q4 - - Event - AI Safety - AI Safety Learning By Doing Workshop (October 2019)
- 2019 Q4 - - Event - X-risk - AI Strategy and X-Risk Unconference (AIXSU)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - What are we assuming about utility functions? [1%] (17; 9)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- 2019 Q4 - Anonymous 2 - Post (EA/LW/AF) - 8 AIS ideas [1%]
- 2019 Q4 - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- 2019 Q4 - Luminita Bogatean - Course - Python Programming: A Concise Introduction [20%]
- 2019 Q4 - Michele Campolo - Post (EA/LW/AF) - Thinking of tool AIs [0%] (6)
- 2019 Q4 - Rafe Kennedy - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- 2019 Q4 - Samuel Knoche - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- 2019 Q4 - Samuel Knoche - Code - NLP implementations [5%]
- 2019 Q4 - Samuel Knoche - Code - Code for lightweight Python deep learning library [5%]
- 2019 Q3 - - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- 2019 Q3 - - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - Cognitive Dissonance and Veg*nism (7)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- 2019 Q3 - Anonymous 2 - Post (EA/LW/AF) - The Moral Circle is not a Circle (36)
- 2019 Q3 - Davide Zagami - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- 2019 Q3 - Linda Linsefors - Event - Organisation - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- 2019 Q3 - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- 2019 Q3 - Linda Linsefors - Statement - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
- 2019 Q1 - Anonymous 1 - Course - MITx Probability
- 2019 Q1 - Anonymous 1 - Course - Model Thinking
- 2019 Q1 - Anonymous 1 - Course - Probabilistic Graphical Models
- 2019 Q1 - Chris Leong - Post (EA/LW/AF) - Deconfusing Logical Counterfactuals [75%] (25; 6)
- 2019 Q1 - Chris Leong - Post (EA/LW/AF) - Debate AI and the Decision to Release an AI [90%] (9)
- 2019 Q1 - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- 2019 Q1 - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
- 2019 Q1 - John Maxwell - Course - MITx Probability
- 2019 Q1 - John Maxwell - Course - Improving Your Statistical Inferences (21 hours)
- 2019 Q1 - John Maxwell - Course - Statistical Learning
- 2019 Q1 - Linda Linsefors - Post (EA/LW/AF) - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.“) (24; 6)
- 2019 Q1 - Linda Linsefors - Post (EA/LW/AF) - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and would therefore not have happened without the hotel.”) [0%] (11; 7)
- 2019 Q1 - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- 2019 Q1 - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- 2019 Q1 - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- 2018 Q4 - Anonymous 1 - Post (EA/LW/AF) - Believing others’ priors (8)
- 2018 Q4 - Anonymous 1 - Post (EA/LW/AF) - AI development incentive gradients are not uniformly terrible (21)
- 2018 Q4 - Anonymous 1 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - On Abstract Systems [50%] (14)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - Summary: Surreal Decisions [50%] (24)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- 2018 Q4 - Chris Leong - Post (EA/LW/AF) - An Extensive Categorisation of Infinite Paradoxes [80%] (-9)
- 2018 Q4 - John Maxwell - Course - ARIMA Modeling with R
- 2018 Q4 - John Maxwell - Course - Introduction to Recommender Systems (20-48 hours)
- 2018 Q4 - John Maxwell - Course - Formal Software Verification
- 2018 Q4 - John Maxwell - Course - Text Mining and Analytics
- 2018 Q3 - Anonymous 1 - Post (EA/LW/AF) - Annihilating aliens & Rare Earth suggest early filter (8)
- 2018 Q3 - John Maxwell - Course - Introduction to Time Series Analysis
- 2018 Q3 - John Maxwell - Course - Regression Models
Animal Welfare related
- 2020 Q3 - Rhys Southan - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- 2020 Q1 - Rhys Southan - Other writing - Talk - Accepted to give a talk and a poster to the academic session of EAG 2020 in San Francisco
- 2019 Q4 - Rhys Southan - Paper (Submitted) - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
- 2019 Q4 - Rhys Southan - Placement - Job - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- 2019 Q3 - Magnus Vinding - Idea - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
- 2019 Q3 - Magnus Vinding - Paper (Revising) - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- 2019 Q3 - Max Carpendale - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- 2019 Q3 - Max Carpendale - Post (EA/LW/AF) - My recommendations for gratitude exercises [50%] (39)
- 2019 Q3 - Nix Goldowsky-Dill - Other writing - Comment - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
- 2019 Q3 - Rhys Southan - Other writing - Essay - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- 2019 Q3 - Rhys Southan - Placement - Job - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - My recommendations for RSI treatment [25%] (69)
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (33)
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- 2019 Q2 - Max Carpendale - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (82)
- 2019 Q1 - Max Carpendale - Post (EA/LW/AF) - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- 2019 Q1 - Max Carpendale - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- 2019 Q1 - Saulius Šimčikas - Post (EA/LW/AF) - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (112)
- 2019 Q1 - Saulius Šimčikas - Post (EA/LW/AF) - Rodents farmed for pet snake food [99%] (73)
- 2018 Q4 - Frederik Bechtold - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
- 2018 Q4 - Max Carpendale - Post (EA/LW/AF) - Why I’m focusing on invertebrate sentience [75%] (53)
- 2018 Q3 - Magnus Vinding - Other writing - Essay - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
- 2018 Q3 - Max Carpendale - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
Global Health and Development related
- 2021 Q3 - Nick Stares - Course - CORE The Economy
- 2020 Q3 - Derek Foster - Paper (Preprint) - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- 2020 Q3 - Derek Foster - Paper (Published) - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- 2020 Q3 - Kris Gulati - Course - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- 2020 Q3 - Kris Gulati - Statement - “Altogether I spent approximately 9–10 months at the Hotel [CEEALAR] (I had appendicitis and took a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents.
Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would probably only have been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- 2020 Q2 - Derek Foster - Other writing - Project - Some of the content of https://1daysooner.org/ [70%]
- 2020 Q2 - Derek Foster - Other writing - Project - Various unpublished documents for the Happier Lives Institute [80%]
- 2020 Q2 - Derek Foster - Other writing - Report - Some confidential COVID-19-related policy reports [70%]
- 2020 Q2 - Derek Foster - Paper (Preprint) - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
- 2020 Q2 - Derek Foster - Post (EA/LW/AF) - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- 2020 Q2 - Derek Foster - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- 2020 Q2 - Derek Foster - Project - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- 2020 Q2 - Kris Gulati - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- 2020 Q1 - Derek Foster - Other writing - Project - A confidential evaluation of an anxiety app [90%]
- 2020 Q1 - Kris Gulati - Course - Distinctions in M140 (Statistics), The Open University
- 2020 Q1 - Kris Gulati - Course - Distinctions in MST125 (Mathematics), The Open University
- 2019 Q4 - Anders Huitfeldt - Paper (Published) - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- 2019 Q4 - Anders Huitfeldt - Post (EA/LW/AF) - Post on EA Forum: Effect heterogeneity and external validity (6)
- 2019 Q4 - Anders Huitfeldt - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- 2019 Q4 - Kris Gulati - Course - Completed MA100 (Mathematical Methods)[auditing module], London School of Economics
- 2019 Q4 - Kris Gulati - Course - Completed GV100 (Intro to Political Theory)[auditing module], London School of Economics
- 2019 Q4 - Kris Gulati - Course - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- 2019 Q3 - Derek Foster - Post (EA/LW/AF) - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- 2019 Q3 - Kris Gulati - Course - Distinction in MU123 (Mathematics), The Open University
- 2019 Q3 - Kris Gulati - Course - Distinctions in MST124 (Mathematics), The Open University
- 2019 Q1 - Derek Foster - Other writing - Book Chapter - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
- 2018 Q4 - Derek Foster - Placement - Job - Hired as a research analyst for Rethink Priorities [95%]
Meta or Community Building
- 2022 Q2 - Laura C - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
- 2022 Q2 - Luminita Bogatean - Course - First out of three courses in the UX Programme by CareerFoundry
- 2022 Q1 - Severin Seehrich - Event - Organisation - Organised Summer Solstice Celebration event for EAs
- 2022 Q1 - Severin Seehrich - Post (EA/LW/AF) - The Berlin Hub: Longtermist co-living space (plan) (78)
- 2022 Q1 - Severin Seehrich - Project - Designed The Berlin Longtermist Hub
Meta or Community Building related
- 2022 Q1 - Denisa Pop - Project - $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- 2022 Q1 - Simmo Simpson - Placement - Job - Job placement as Executive Assistant to the COO of Alvea
- 2022 Q1 - Vinay Hiremath - Project - Built website for SERI conference
- 2021 Q4 - Aaron Maiwald - Podcast - 70% of the work for the Gutes Einfach Tun Podcast Episodes 3, 4, 5 and 6
- 2021 Q4 - Simmo Simpson - Course - Became an Association for Coaching accredited coach
- 2021 Q3 - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast launch & Episode 1
- 2021 Q3 - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast launch & Episode 2
- 2021 Q2 - Anonymous - Placement - Job - Obtained a job at the EA org Momentum
- 2021 Q2 - Denisa Pop - Course - Introduction into Group Facilitation
- 2021 Q2 - Jack Harley - Project - Created Longevity wiki and expanded the team to 8 members
- 2021 Q2 - Quinn Dougherty - Event - Organisation - Organised an event for AI Philadelphia, inviting anti-aging expert Jack Harley
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - What am I fighting for? (8)
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - Cliffnotes to Craft of Research parts I, II, and III
- 2020 Q4 - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
- 2020 Q3 - Denisa Pop - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - The Best Educational Institution in the World [1%] (11)
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - The Case for Education [1%] (16)
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - Blogs I’ve Been Reading [1%]
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - Books I’ve Been Reading [1%]
- 2020 Q3 - Samuel Knoche - Post (EA/LW/AF) - List of Peter Thiel’s Online Writings [5%]
- 2020 Q2 - Denisa Pop - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- 2020 Q2 - Luminita Bogatean - Placement - Job - Becoming Operations Manager at CEEALAR
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Students are Employees [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - My Quarantine Reading List [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - The End of Education [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - How Steven Pinker Could Become Really Rich [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - The Public Good of Education [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Trillion Dollar Bills on the Sidewalk [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - On Schools and Churches [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Questions to Guide Life and Learning [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Online Standardized Tests Are a Bad Idea [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - Harari on Religions and Ideologies [1%]
- 2020 Q2 - Samuel Knoche - Other writing - Blog - The Single Best Policy to Combat Climate Change [1%]
- 2020 Q2 - Samuel Knoche - Post (EA/LW/AF) - Patrick Collison on Effective Altruism [1%] (74)
- 2020 Q1 - Samuel Knoche - Other writing - Blog - What I Learned Dropping Out of High School [1%]
- 2019 Q4 - Samuel Knoche - Code - Lottery Ticket Hypothesis [5%]
- 2019 Q2 - - Event - Rationality Workshop - Athena Rationality Workshop (June 2019) (retrospective) (24)
- 2019 Q2 - Denisa Pop - Other writing - Talk - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- 2019 Q2 - Denisa Pop - Project - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- 2019 Q2 - Matt Goldenberg - Event - Organisation - Organizer and instructor for the Athena Rationality Workshop (June 2019)
- 2019 Q1 - - Event - EA Retreat - EA Glasgow (March 2019)
- 2019 Q1 - Denisa Pop - Event - Organisation - Helped organise the EA Values-to-Actions Retreat [33%]
- 2019 Q1 - Denisa Pop - Event - Organisation - Helped organise the EA Community Health Unconference [33%]
- 2019 Q1 - Matt Goldenberg - Code - The entirety of Project Metis [5%]
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - What Vibing Feels Like [5%] (19)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (99)
- 2019 Q1 - Matt Goldenberg - Post (EA/LW/AF) - The 3 Books Technique for Learning a New Skill [5%] (175)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - EA is vetting-constrained [10%] (125)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - Task Y: representing EA in your field [90%] (11)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - What makes a good culture? [90%] (29)
- 2019 Q1 - Toon Alfrink - Post (EA/LW/AF) - The Home Base of EA [90%]
- 2018 Q4 - Toon Alfrink - Post (EA/LW/AF) - The housekeeper [10%] (23)
- 2018 Q4 - Toon Alfrink - Post (EA/LW/AF) - We can all be high status [10%] (56)
- 2018 Q3 - - Event - EA Retreat - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
X-Risks
- 2022 Q2 - Theo Knopfer - Event - Organisation - Received a grant from CEA to organise retreats on x-risk in France
- 2022 Q2 - Theo Knopfer - Course - Accepted into CHERI Summer Fellowship
- 2022 Q1 - Theo Knopfer - Course - Policy Careers in Europe by Training for Good (https://www.trainingforgood.com/policy-careers-europe)
- 2022 Q1 - Theo Knopfer - Course - Weapons of Mass Destruction by IEGA
- 2021 Q2 - Quinn Dougherty - Post (EA/LW/AF) - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (8)
- 2020 Q3 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q3 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q2 - Aron Mill - Other writing - Project - Helped Launch the Food Systems Handbook (announcement) (30)
- 2020 Q2 - Aron Mill - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
- 2020 Q2 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q2 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - State Space of X-Risk Trajectories [95%] (24)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- 2020 Q1 - David Kristoffersson - Post (EA/LW/AF) - The ‘far future’ is not just the far future [99%] (29)
- 2020 Q1 - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- 2020 Q1 - Michael Aird - Post (EA/LW/AF) - Four components of strategy research [50%] (21)
- 2020 Q1 - Michael Aird - Post (EA/LW/AF) - Using vector fields to visualise preferences and make them consistent [90%] (39; 7)
- 2020 Q1 - Michael Aird - Post (EA/LW/AF) - Value uncertainty [95%] (16)
- 2019 Q4 - David Kristoffersson - Event - Organisation - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- 2019 Q4 - Markus Salmela - Project - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- 2019 Q3 - David Kristoffersson - Project - Applied for 501c3 non-profit status for Convergence [non-profit status approved in 2019] [95%]
- 2019 Q3 - Justin Shovelain - Project - Got non-profit status for Convergence Analysis and established it legally [90%]
- 2019 Q1 - David Kristoffersson - Other writing - Talk - Designed Convergence presentation (slides, notes) and held it at the Future of Humanity Institute [80%]
- 2019 Q1 - David Kristoffersson - Project - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- 2019 Q1 - David Kristoffersson - Project - Built new website for Convergence [90%]
- 2019 Q1 - Markus Salmela - Paper (Published) - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
- 2018 Q4 - David Kristoffersson - Project - Incorporated Convergence [95%]
Key:
- Cause Area - Name of Grantee - Output Type - Title with link[C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
2022 Q2
- AI Alignment - Aaron Maiwald - Course - Courses on mathematics as part of his bachelor's degree in Cognitive Science
- AI Alignment - Aaron Maiwald - Course - Courses on ML as part of his bachelor's degree in Cognitive Science
- AI Alignment - Lucas Teixeira - Placement - Internship - Internship at Conjecture
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Some alternative AI safety research projects (8; 2)
- AI Alignment - Samuel Knoche - Placement - Job - Becoming a Contractor for Open AI
- AI Alignment - Vinay Hiremath - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning Alignment Bootcamp)
- Meta or Community Building - Laura C - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
- Meta or Community Building - Luminita Bogatean - Course - First of three courses in the UX Programme by CareerFoundry
- X-Risks - Theo Knopfer - Event - Organisation - Received a grant from CEA to organise retreats on x-risk in France
- X-Risks - Theo Knopfer - Course - Accepted into CHERI Summer Fellowship
2022 Q1
- AI Alignment - David King - Course - AGISF
- AI Alignment - Jaeson Booker - Course - AGISF
- AI Alignment - Michele Campolo - Course - Attended an online course on moral psychology by the University of Warwick
- AI Alignment - Peter Barnett - Placement - Internship - Accepted for an internship at CHAI
- AI Alignment - Peter Barnett - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- AI Alignment - Peter Barnett - Post (EA/LW/AF) - Thoughts on Dangerous Learned Optimization (4)
- AI Alignment - Peter Barnett - Post (EA/LW/AF) - Alignment Problems All the Way Down (25)
- AI Alignment - Theo Knopfer - Course - AGISF
- AI Alignment - Vinay Hiremath - Course - AGISF
- Meta or Community Building - Severin Seehrich - Event - Organisation - Organised Summer Solstice Celebration event for EAs
- Meta or Community Building - Severin Seehrich - Post (EA/LW/AF) - The Berlin Hub: Longtermist co-living space (plan) (78)
- Meta or Community Building - Severin Seehrich - Project - Designed The Berlin Longtermist Hub
- Meta or Community Building related - Denisa Pop - Project - $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- Meta or Community Building related - Simmo Simpson - Placement - Job - Job placement as Executive Assistant to COO of Alvea.
- Meta or Community Building related - Vinay Hiremath - Project - Built website for SERI conference
- X-Risks - Theo Knopfer - Course - Policy Careers in Europe by Training for Good (https://www.trainingforgood.com/policy-careers-europe)
- X-Risks - Theo Knopfer - Course - Weapons of Mass Destruction by IEGA
2021 Q4
- AI Alignment - Charlie Steiner - Post (EA/LW/AF) - Reducing Goodhart sequence (115; 71)
- AI Alignment - Jaeson Booker - Code - AI Policy Simulator
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - From language to ethics by automated reasoning (2; 2)
- Meta or Community Building related - Aaron Maiwald - Podcast - 70% of the work for the Gutes Einfach Tun Podcast Episodes 3, 4, 5 and 6
- Meta or Community Building related - Simmo Simpson - Course - Became an Association for Coaching accredited coach
2021 Q3
- AI Alignment - Michele Campolo - Event - Attended Good AI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- Global Health and Development related - Nick Stares - Course - CORE The Economy
- Meta or Community Building related - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast launch & Episode 1
- Meta or Community Building related - Aaron Maiwald - Podcast - Gutes Einfach Tun Podcast launch & Episode 2
2021 Q2
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Naturalism and AI alignment (10; 5)
- AI Alignment - Quinn Dougherty - Course/event - AI Safety Camp
- AI Alignment - Quinn Dougherty - Placement - Internship - Obtained internship at SERI
- AI Alignment - Quinn Dougherty - Podcast - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- AI Alignment - Quinn Dougherty - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (25)
- AI Alignment - Samuel Knoche - Code - Creating code [1%]
- Meta or Community Building related - Anonymous - Placement - Job - Obtained job for the EA org Momentum
- Meta or Community Building related - Denisa Pop - Course - Introduction into Group Facilitation
- Meta or Community Building related - Jack Harley - Project - Created Longevity wiki and expanded the team to 8 members
- Meta or Community Building related - Quinn Dougherty - Event - Organisation - organising an event for AI Philadelphia, inviting anti-aging expert Jack Harley
- Meta or Community Building related - Quinn Dougherty - Post (EA/LW/AF) - What am I fighting for? (8)
- Meta or Community Building related - Quinn Dougherty - Post (EA/LW/AF) - Cliffnotes to Craft of Research parts I, II, and III
- X-Risks - Quinn Dougherty - Post (EA/LW/AF) - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (8)
2021 Q1
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Contribution to Literature Review on Goal-Directedness [20%] (58; 30)
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 2
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 1
- AI Alignment - Quinn Dougherty - Podcast - Technical AI Safety Podcast Episode 3
2020 Q4
- Meta or Community Building related - CEEALAR - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
2020 Q3
- AI Alignment - Luminita Bogatean - Course - Enrolled in the Open University's bachelor's degree in Computing & IT and Design [20%]
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Decision Theory is multifaceted [25%] (6; 4)
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Goals and short descriptions [25%] (14; 7)
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Postponing research can sometimes be the optimal decision [25%] (28)
- Animal Welfare related - Rhys Southan - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Global Health and Development related - Derek Foster - Paper (Preprint) - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- Global Health and Development related - Derek Foster - Paper (Published) - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- Global Health and Development related - Kris Gulati - Course - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- Global Health and Development related - Kris Gulati - Statement - “All together I spent approximately 9/10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents.
Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- Meta or Community Building related - Denisa Pop - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- Meta or Community Building related - Samuel Knoche - Post (EA/LW/AF) - The Best Educational Institution in the World [1%] (11)
- Meta or Community Building related - Samuel Knoche - Post (EA/LW/AF) - The Case for Education [1%] (16)
- Meta or Community Building related - Samuel Knoche - Post (EA/LW/AF) - Blogs I’ve Been Reading [1%]
- Meta or Community Building related - Samuel Knoche - Post (EA/LW/AF) - Books I’ve Been Reading [1%]
- Meta or Community Building related - Samuel Knoche - Post (EA/LW/AF) - List of Peter Thiel’s Online Writings [5%]
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
2020 Q2
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (58; 30)
- Global Health and Development related - Derek Foster - Other writing - Project - Some of the content of https://1daysooner.org/ [70%]
- Global Health and Development related - Derek Foster - Other writing - Project - Various unpublished documents for the Happier Lives Institute [80%]
- Global Health and Development related - Derek Foster - Other writing - Report - Some confidential COVID-19-related policy reports [70%]
- Global Health and Development related - Derek Foster - Paper (Preprint) - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
- Global Health and Development related - Derek Foster - Post (EA/LW/AF) - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- Global Health and Development related - Derek Foster - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Global Health and Development related - Derek Foster - Project - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- Global Health and Development related - Kris Gulati - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- Meta or Community Building related - Denisa Pop - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Meta or Community Building related - Luminita Bogatean - Placement - Job - Becoming Operations Manager at CEEALAR
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - Students are Employees [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - My Quarantine Reading List [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - The End of Education [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - How Steven Pinker Could Become Really Rich [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - The Public Good of Education [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - Trillion Dollar Bills on the Sidewalk [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - On Schools and Churches [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - Questions to Guide Life and Learning [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - Online Standardized Tests Are a Bad Idea [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - Harari on Religions and Ideologies [1%]
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - The Single Best Policy to Combat Climate Change [1%]
- Meta or Community Building related - Samuel Knoche - Post (EA/LW/AF) - Patrick Collison on Effective Altruism [1%] (74)
- X-Risks - Aron Mill - Other writing - Project - Helped Launch the Food Systems Handbook (announcement) (30)
- X-Risks - Aron Mill - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
2020 Q1
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Wireheading and discontinuity [25%] (21; 11)
- Animal Welfare related - Rhys Southan - Other writing - Talk - Accepted to give a talk and a poster to the academic session of EAG 2020 in San Francisco
- Global Health and Development related - Derek Foster - Other writing - Project - A confidential evaluation of an anxiety app [90%]
- Global Health and Development related - Kris Gulati - Course - Distinctions in M140 (Statistics), The Open University
- Global Health and Development related - Kris Gulati - Course - Distinctions in MST125 (Mathematics), The Open University
- Meta or Community Building related - Samuel Knoche - Other writing - Blog - What I Learned Dropping Out of High School [1%]
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - State Space of X-Risk Trajectories [95%] (24)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - David Kristoffersson - Post (EA/LW/AF) - The ‘far future’ is not just the far future [99%] (29)
- X-Risks - Justin Shovelain - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - Michael Aird - Post (EA/LW/AF) - Four components of strategy research [50%] (21)
- X-Risks - Michael Aird - Post (EA/LW/AF) - Using vector fields to visualise preferences and make them consistent [90%] (39; 7)
- X-Risks - Michael Aird - Post (EA/LW/AF) - Value uncertainty [95%] (16)
2019 Q4
- AI Alignment - - Event - AI Safety - AI Safety Learning By Doing Workshop (October 2019)
- AI Alignment - - Event - X-risk - AI Strategy and X-Risk Unconference (AIXSU)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - What are we assuming about utility functions? [1%] (17; 9)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - 8 AIS ideas [1%]
- AI Alignment - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - Luminita Bogatean - Course - Python Programming: A Concise Introduction [20%]
- AI Alignment - Michele Campolo - Post (EA/LW/AF) - Thinking of tool AIs [0%] (6)
- AI Alignment - Rafe Kennedy - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- AI Alignment - Samuel Knoche - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - Samuel Knoche - Code - NLP implementations [5%]
- AI Alignment - Samuel Knoche - Code - Code for lightweight Python deep learning library [5%]
- Animal Welfare related - Rhys Southan - Paper (Submitted) - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
- Animal Welfare related - Rhys Southan - Placement - Job - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Global Health and Development related - Anders Huitfeldt - Paper (Published) - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- Global Health and Development related - Anders Huitfeldt - Post (EA/LW/AF) - Post on EA Forum: Effect heterogeneity and external validity (6)
- Global Health and Development related - Anders Huitfeldt - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- Global Health and Development related - Kris Gulati - Course - Completed MA100 (Mathematical Methods) [auditing module], London School of Economics
- Global Health and Development related - Kris Gulati - Course - Completed GV100 (Intro to Political Theory) [auditing module], London School of Economics
- Global Health and Development related - Kris Gulati - Course - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- Meta or Community Building related - Samuel Knoche - Code - Lottery Ticket Hypothesis [5%]
- X-Risks - David Kristoffersson - Event - Organisation - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- X-Risks - Markus Salmela - Paper (Published) - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
2019 Q3
- AI Alignment - - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Cognitive Dissonance and Veg*nism (7)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- AI Alignment - Anonymous 2 - Post (EA/LW/AF) - The Moral Circle is not a Circle (36)
- AI Alignment - Davide Zagami - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- AI Alignment - Linda Linsefors - Event - Organisation - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- AI Alignment - Linda Linsefors - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - Linda Linsefors - Statement - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
- Animal Welfare related - Magnus Vinding - Idea - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
- Animal Welfare related - Magnus Vinding - Paper (Revising) - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - My recommendations for gratitude exercises [50%] (39)
- Animal Welfare related - Nix Goldowsky-Dill - Other writing - comment - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
- Animal Welfare related - Rhys Southan - Other writing - Essay - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- Animal Welfare related - Rhys Southan - Placement - Job - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
- Global Health and Development related - Derek Foster - Post (EA/LW/AF) - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Global Health and Development related - Kris Gulati - Course - Distinction in MU123 (Mathematics), The Open University
- Global Health and Development related - Kris Gulati - Course - Distinction in MST124 (Mathematics), The Open University
- X-Risks - David Kristoffersson - Project - Applied for 501c3 non-profit status for Convergence [non-profit status approved in 2019] [95%]
- X-Risks - Justin Shovelain - Project - Got non-profit status for Convergence Analysis and established it legally [90%]
2019 Q2
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - My recommendations for RSI treatment [25%] (69)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (33)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (82)
- Meta or Community Building related - - Event - Rationality Workshop - Athena Rationality Workshop (June 2019) (retrospective) (24)
- Meta or Community Building related - Denisa Pop - Other writing - Talk - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- Meta or Community Building related - Denisa Pop - Project - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- Meta or Community Building related - Matt Goldenberg - Event - Organisation - Organizer and instructor for the Athena Rationality Workshop (June 2019)
2019 Q1
- AI Alignment - Anonymous 1 - Course - MITx Probability
- AI Alignment - Anonymous 1 - Course - Model Thinking
- AI Alignment - Anonymous 1 - Course - Probabilistic Graphical Models
- AI Alignment - Chris Leong - Post (EA/LW/AF) - Deconfusing Logical Counterfactuals [75%] (25; 6)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - Debate AI and the Decision to Release an AI [90%] (9)
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - Davide Zagami - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
- AI Alignment - John Maxwell - Course - MITx Probability
- AI Alignment - John Maxwell - Course - Improving Your Statistical Inferences (21 hours)
- AI Alignment - John Maxwell - Course - Statistical Learning
- AI Alignment - Linda Linsefors - Post (EA/LW/AF) - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.”) (24; 6)
- AI Alignment - Linda Linsefors - Post (EA/LW/AF) - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and would therefore not have happened without the hotel.”) [0%] (11; 7)
- AI Alignment - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- AI Alignment - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- AI Alignment - RAISE - Course Module produced - Nearly the entirety of this online course was created by grantees
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- Animal Welfare related - Saulius Šimčikas - Post (EA/LW/AF) - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (112)
- Animal Welfare related - Saulius Šimčikas - Post (EA/LW/AF) - Rodents farmed for pet snake food [99%] (73)
- Global Health and Development related - Derek Foster - Other writing - Book Chapter - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
- Meta or Community Building related - - Event - EA Retreat - EA Glasgow (March 2019)
- Meta or Community Building related - Denisa Pop - Event - Organisation - Helped organise the EA Values-to-Actions Retreat [33%]
- Meta or Community Building related - Denisa Pop - Event - Organisation - Helped organise the EA Community Health Unconference [33%]
- Meta or Community Building related - Matt Goldenberg - Code - The entirety of Project Metis [5%]
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - What Vibing Feels Like [5%] (19)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (99)
- Meta or Community Building related - Matt Goldenberg - Post (EA/LW/AF) - The 3 Books Technique for Learning a New Skill [5%] (175)
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - EA is vetting-constrained [10%] (125)
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - Task Y: representing EA in your field [90%] (11)
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - What makes a good culture? [90%] (29)
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - The Home Base of EA [90%]
- X-Risks - David Kristoffersson - Other writing - Talk - Designed Convergence presentation (slides, notes) and held it at the Future of Humanity Institute [80%]
- X-Risks - David Kristoffersson - Project - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- X-Risks - David Kristoffersson - Project - Built new website for Convergence [90%]
- X-Risks - Markus Salmela - Paper (Published) - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
2018 Q4
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Believing others’ priors (8)
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - AI development incentive gradients are not uniformly terrible (21)
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - On Abstract Systems [50%] (14)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - Summary: Surreal Decisions [50%] (24)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- AI Alignment - Chris Leong - Post (EA/LW/AF) - An Extensive Categorisation of Infinite Paradoxes [80%] (-9)
- AI Alignment - John Maxwell - Course - ARIMA Modeling with R
- AI Alignment - John Maxwell - Course - Introduction to Recommender Systems (20-48 hours)
- AI Alignment - John Maxwell - Course - Formal Software Verification
- AI Alignment - John Maxwell - Course - Text Mining and Analytics
- Animal Welfare related - Frederik Bechtold - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
- Animal Welfare related - Max Carpendale - Post (EA/LW/AF) - Why I’m focusing on invertebrate sentience [75%] (53)
- Global Health and Development related - Derek Foster - Placement - Job - Hired as a research analyst for Rethink Priorities [95%]
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - The housekeeper [10%] (23)
- Meta or Community Building related - Toon Alfrink - Post (EA/LW/AF) - We can all be high status [10%] (56)
- X-Risks - David Kristoffersson - Project - Incorporated Convergence [95%]
2018 Q3
- AI Alignment - Anonymous 1 - Post (EA/LW/AF) - Annihilating aliens & Rare Earth suggest early filter (8)
- AI Alignment - John Maxwell - Course - Introduction to Time Series Analysis
- AI Alignment - John Maxwell - Course - Regression Models
- Animal Welfare related - Magnus Vinding - Other writing - Essay - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
- Animal Welfare related - Max Carpendale - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
- Meta or Community Building related - - Event - EA Retreat - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
Key:
- Cause Area - Quarter - Output Type - Title with link[C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
- AI Alignment - 2019 Q4 - Event - AI Safety - AI Safety Learning By Doing Workshop (October 2019)
- AI Alignment - 2019 Q4 - Event - X-risk - AI Strategy and X-Risk Unconference (AIXSU)
- AI Alignment - 2019 Q3 - Event - AI Safety - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - 2019 Q3 - Event - AI Safety - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
- Meta or Community Building related - 2019 Q2 - Event - Rationality Workshop - Athena Rationality Workshop (June 2019) (retrospective) (24)
- Meta or Community Building related - 2019 Q1 - Event - EA Retreat - EA Glasgow (March 2019)
- Meta or Community Building related - 2018 Q3 - Event - EA Retreat - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
Aaron Maiwald
- AI Alignment - 2022 Q2 - Course - Courses on mathematics as part of his bachelor's degree in Cognitive Science
- AI Alignment - 2022 Q2 - Course - Courses on ML as part of his bachelor's degree in Cognitive Science
- Meta or Community Building related - 2021 Q4 - Podcast - 70% of the work for Gutes Einfach Tun Podcast episodes 3, 4, 5 and 6
- Meta or Community Building related - 2021 Q3 - Podcast - Gutes Einfach Tun Podcast launch & Episode 1
- Meta or Community Building related - 2021 Q3 - Podcast - Gutes Einfach Tun Podcast launch & Episode 2
Anders Huitfeldt
- Global Health and Development related - 2019 Q4 - Paper (Published) - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- Global Health and Development related - 2019 Q4 - Post (EA/LW/AF) - Post on EA Forum: Effect heterogeneity and external validity (6)
- Global Health and Development related - 2019 Q4 - Post (EA/LW/AF) - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
Anonymous
- Meta or Community Building related - 2021 Q2 - Placement - Job - Obtained a job at the EA org Momentum
Anonymous 1
- AI Alignment - 2019 Q1 - Course - MITx Probability
- AI Alignment - 2019 Q1 - Course - Model Thinking
- AI Alignment - 2019 Q1 - Course - Probabilistic Graphical Models
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - Believing others’ priors (8)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - AI development incentive gradients are not uniformly terrible (21)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - Should donor lottery winners write reports? (29)
- AI Alignment - 2018 Q3 - Post (EA/LW/AF) - Annihilating aliens & Rare Earth suggest early filter (8)
Anonymous 2
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - What are we assuming about utility functions? [1%] (17; 9)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - 8 AIS ideas [1%]
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - Cognitive Dissonance and Veg*nism (7)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - Distance Functions are Hard (31; 11)
- AI Alignment - 2019 Q3 - Post (EA/LW/AF) - The Moral Circle is not a Circle (36)
Aron Mill
- X-Risks - 2020 Q2 - Other writing - Project - Helped Launch the Food Systems Handbook (announcement) (30)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
CEEALAR
- Meta or Community Building related - 2020 Q4 - Statement - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
Charlie Steiner
- AI Alignment - 2021 Q4 - Post (EA/LW/AF) - Reducing Goodhart sequence (115; 71)
Chris Leong
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - Deconfusing Logical Counterfactuals [75%] (25; 6)
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - Debate AI and the Decision to Release an AI [90%] (9)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - On Abstract Systems [50%] (14)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - Summary: Surreal Decisions [50%] (24)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - On Disingenuity [50%] (28)
- AI Alignment - 2018 Q4 - Post (EA/LW/AF) - An Extensive Categorisation of Infinite Paradoxes [80%] (-9)
Davide Zagami
- AI Alignment - 2019 Q3 - Paper (Published) - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - RAISE lessons on Fundamentals of Formalization [5%] (28)
David King
- AI Alignment - 2022 Q1 - Course - AGISF
David Kristoffersson
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - State Space of X-Risk Trajectories [95%] (24)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - The ‘far future’ is not just the far future [99%] (29)
- X-Risks - 2019 Q4 - Event - Organisation - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- X-Risks - 2019 Q3 - Project - Applied for 501c3 non-profit status for Convergence [non-profit status approved in 2019] [95%]
- X-Risks - 2019 Q1 - Other writing - Talk - Designed Convergence presentation (slides, notes) and held it at the Future of Humanity Institute [80%]
- X-Risks - 2019 Q1 - Project - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- X-Risks - 2019 Q1 - Project - Built new website for Convergence [90%]
- X-Risks - 2018 Q4 - Project - Incorporated Convergence [95%]
Denisa Pop
- Meta or Community Building related - 2022 Q1 - Project - $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- Meta or Community Building related - 2021 Q2 - Course - Introduction into Group Facilitation
- Meta or Community Building related - 2020 Q3 - Placement - Internship - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- Meta or Community Building related - 2020 Q2 - Placement - Internship - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Meta or Community Building related - 2019 Q2 - Other writing - Talk - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- Meta or Community Building related - 2019 Q2 - Project - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- Meta or Community Building related - 2019 Q1 - Event - Organisation - Helped organise the EA Values-to-Actions Retreat [33%]
- Meta or Community Building related - 2019 Q1 - Event - Organisation - Helped organise the EA Community Health Unconference [33%]
Derek Foster
- Global Health and Development related - 2020 Q3 - Paper (Preprint) - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- Global Health and Development related - 2020 Q3 - Paper (Published) - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- Global Health and Development related - 2020 Q2 - Other writing - Project - Some of the content of https://1daysooner.org/ [70%]
- Global Health and Development related - 2020 Q2 - Other writing - Project - Various unpublished documents for the Happier Lives Institute [80%]
- Global Health and Development related - 2020 Q2 - Other writing - Report - Some confidential COVID-19-related policy reports [70%]
- Global Health and Development related - 2020 Q2 - Paper (Preprint) - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
- Global Health and Development related - 2020 Q2 - Post (EA/LW/AF) - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- Global Health and Development related - 2020 Q2 - Post (EA/LW/AF) - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Global Health and Development related - 2020 Q2 - Project - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- Global Health and Development related - 2020 Q1 - Other writing - Project - A confidential evaluation of an anxiety app [90%]
- Global Health and Development related - 2019 Q3 - Post (EA/LW/AF) - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Global Health and Development related - 2019 Q1 - Other writing - Book Chapter - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
- Global Health and Development related - 2018 Q4 - Placement - Job - Hired as a research analyst for Rethink Priorities [95%]
Frederik Bechtold
- Animal Welfare related - 2018 Q4 - Placement - Internship - Received an (unpaid) internship at Animal Ethics [1%]
Jack Harley
- Meta or Community Building related - 2021 Q2 - Project - Created Longevity wiki and expanded the team to 8 members
Jaeson Booker
- AI Alignment - 2022 Q1 - Course - AGISF
- AI Alignment - 2021 Q4 - Code - AI Policy Simulator
John Maxwell
- AI Alignment - 2019 Q1 - Course - MITx Probability
- AI Alignment - 2019 Q1 - Course - Improving Your Statistical Inferences (21 hours)
- AI Alignment - 2019 Q1 - Course - Statistical Learning
- AI Alignment - 2018 Q4 - Course - ARIMA Modeling with R
- AI Alignment - 2018 Q4 - Course - Introduction to Recommender Systems (20-48 hours)
- AI Alignment - 2018 Q4 - Course - Formal Software Verification
- AI Alignment - 2018 Q4 - Course - Text Mining and Analytics
- AI Alignment - 2018 Q3 - Course - Introduction to Time Series Analysis
- AI Alignment - 2018 Q3 - Course - Regression Models
Justin Shovelain
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q3 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q2 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2019 Q3 - Project - Got non-profit status for Convergence Analysis and established it legally [90%]
Kris Gulati
- Global Health and Development related - 2020 Q3 - Course - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- Global Health and Development related - 2020 Q3 - Statement - “All together I spent approximately 9/10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents.
Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- Global Health and Development related - 2020 Q2 - Placement - PhD - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
- Global Health and Development related - 2020 Q1 - Course - Distinction in M140 (Statistics), The Open University
- Global Health and Development related - 2020 Q1 - Course - Distinction in MST125 (Mathematics), The Open University
- Global Health and Development related - 2019 Q4 - Course - Completed MA100 (Mathematical Methods) [auditing module], London School of Economics
- Global Health and Development related - 2019 Q4 - Course - Completed GV100 (Intro to Political Theory) [auditing module], London School of Economics
- Global Health and Development related - 2019 Q4 - Course - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- Global Health and Development related - 2019 Q3 - Course - Distinction in MU123 (Mathematics), The Open University
- Global Health and Development related - 2019 Q3 - Course - Distinction in MST124 (Mathematics), The Open University
Laura C
- Meta or Community Building - 2022 Q2 - Placement - Job - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
Linda Linsefors
- AI Alignment - 2019 Q4 - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - 2019 Q3 - Event - Organisation - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- AI Alignment - 2019 Q3 - Event - Organisation - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- AI Alignment - 2019 Q3 - Statement - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.“) (24; 6)
- AI Alignment - 2019 Q1 - Post (EA/LW/AF) - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and would therefore not have happened without the hotel.”) [0%] (11; 7)
Lucas Teixeira
- AI Alignment - 2022 Q2 - Placement - Internship - Internship at Conjecture
Luminita Bogatean
- Meta or Community Building - 2022 Q2 - Course - Completed the first of three courses in the UX Programme by CareerFoundry
- AI Alignment - 2020 Q3 - Course - Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design [20%]
- Meta or Community Building related - 2020 Q2 - Placement - Job - Becoming Operations Manager at CEEALAR
- AI Alignment - 2019 Q4 - Course - Course: Python Programming: A Concise Introduction [20%]
Magnus Vinding
- Animal Welfare related - 2019 Q3 - Idea - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
- Animal Welfare related - 2019 Q3 - Paper (Revising) - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
- Animal Welfare related - 2018 Q3 - Other writing - Essay - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
Markus Salmela
- X-Risks - 2019 Q4 - Paper (Published) - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- X-Risks - 2019 Q1 - Paper (Published) - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
Matt Goldenberg
- Meta or Community Building related - 2019 Q2 - Event - Organisation - Organizer and instructor for the Athena Rationality Workshop (June 2019)
- Meta or Community Building related - 2019 Q1 - Code - The entirety of Project Metis [5%]
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - What Vibing Feels Like [5%] (19)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - S-Curves for Trend Forecasting [5%] (99)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - The 3 Books Technique for Learning a New Skill [5%] (175)
Max Carpendale
- Animal Welfare related - 2019 Q3 - Post (EA/LW/AF) - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Animal Welfare related - 2019 Q3 - Post (EA/LW/AF) - My recommendations for gratitude exercises [50%] (39)
- Animal Welfare related - 2019 Q2 - Post (EA/LW/AF) - My recommendations for RSI treatment [25%] (69)
- Animal Welfare related - 2019 Q2 - Post (EA/LW/AF) - Thoughts on the welfare of farmed insects [50%] (33)
- Animal Welfare related - 2019 Q2 - Post (EA/LW/AF) - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare related - 2019 Q2 - Post (EA/LW/AF) - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (82)
- Animal Welfare related - 2019 Q1 - Post (EA/LW/AF) - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- Animal Welfare related - 2019 Q1 - Post (EA/LW/AF) - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- Animal Welfare related - 2018 Q4 - Post (EA/LW/AF) - Why I’m focusing on invertebrate sentience [75%] (53)
- Animal Welfare related - 2018 Q3 - Placement - Job - Got a research position (part-time) at Animal Ethics [25%]
Michael Aird
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Four components of strategy research [50%] (21)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Using vector fields to visualise preferences and make them consistent [90%] (39; 7)
- X-Risks - 2020 Q1 - Post (EA/LW/AF) - Value uncertainty [95%] (16)
Michele Campolo
- AI Alignment - 2022 Q2 - Post (EA/LW/AF) - Some alternative AI safety research projects (8; 2)
- AI Alignment - 2022 Q1 - Course - Attended online course on moral psychology by the University of Warwick
- AI Alignment - 2021 Q4 - Post (EA/LW/AF) - From language to ethics by automated reasoning (2; 2)
- AI Alignment - 2021 Q3 - Event - Attended Good AI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
- AI Alignment - 2021 Q2 - Post (EA/LW/AF) - Naturalism and AI alignment (10; 5)
- AI Alignment - 2021 Q1 - Post (EA/LW/AF) - Contribution to Literature Review on Goal-Directedness [20%] (58; 30)
- AI Alignment - 2020 Q3 - Post (EA/LW/AF) - Decision Theory is multifaceted [25%] (6; 4)
- AI Alignment - 2020 Q3 - Post (EA/LW/AF) - Goals and short descriptions [25%] (14; 7)
- AI Alignment - 2020 Q3 - Post (EA/LW/AF) - Postponing research can sometimes be the optimal decision [25%] (28)
- AI Alignment - 2020 Q2 - Post (EA/LW/AF) - Contributions to the sequence Thoughts on Goal-Directedness [25%] (58; 30)
- AI Alignment - 2020 Q1 - Post (EA/LW/AF) - Wireheading and discontinuity [25%] (21; 11)
- AI Alignment - 2019 Q4 - Post (EA/LW/AF) - Thinking of tool AIs [0%] (6)
Nick Stares
- Global Health and Development related - 2021 Q3 - Course - CORE The Economy
Nix Goldowsky-Dill
- Animal Welfare related - 2019 Q3 - Other writing - comment - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
Peter Barnett
- AI Alignment - 2022 Q1 - Placement - Internship - Accepted for an internship at CHAI
- AI Alignment - 2022 Q1 - Placement - Internship - Accepted into SERI ML Alignment Theory Scholars program
- AI Alignment - 2022 Q1 - Post (EA/LW/AF) - Thoughts on Dangerous Learned Optimization (4)
- AI Alignment - 2022 Q1 - Post (EA/LW/AF) - Alignment Problems All the Way Down (25)
Quinn Dougherty
- AI Alignment - 2021 Q2 - Course/event - AI Safety Camp
- AI Alignment - 2021 Q2 - Placement - Internship - Obtained internship at SERI
- AI Alignment - 2021 Q2 - Podcast - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- AI Alignment - 2021 Q2 - Post (EA/LW/AF) - High Impact Careers in Formal Verification: Artificial Intelligence (25)
- Meta or Community Building related - 2021 Q2 - Event - Organisation - Organised an event for EA Philadelphia, inviting anti-aging expert Jack Harley
- Meta or Community Building related - 2021 Q2 - Post (EA/LW/AF) - What am I fighting for? (8)
- Meta or Community Building related - 2021 Q2 - Post (EA/LW/AF) - Cliffnotes to Craft of Research parts I, II, and III
- X-Risks - 2021 Q2 - Post (EA/LW/AF) - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (8)
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast Episode 2
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast Episode 1
- AI Alignment - 2021 Q1 - Podcast - Technical AI Safety Podcast Episode 3
Rafe Kennedy
- AI Alignment - 2019 Q4 - Placement - Job - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
RAISE
- AI Alignment - 2019 Q1 - Course Module produced - Nearly the entirety of this online course was created by grantees
- AI Alignment - 2019 Q1 - Course Module produced - Nearly the entirety of this online course was created by grantees
- AI Alignment - 2019 Q1 - Course Module produced - Nearly the entirety of this online course was created by grantees
Rhys Southan
- Animal Welfare related - 2020 Q3 - Placement - PhD - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Animal Welfare related - 2020 Q1 - Other writing - Talk - Accepted to give a talk and present a poster at the academic session of EAG 2020 in San Francisco
- Animal Welfare related - 2019 Q4 - Paper (Submitted) - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
- Animal Welfare related - 2019 Q4 - Placement - Job - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Animal Welfare related - 2019 Q3 - Other writing - Essay - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- Animal Welfare related - 2019 Q3 - Placement - Job - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
Samuel Knoche
- AI Alignment - 2022 Q2 - Placement - Job - Becoming a Contractor for OpenAI
- AI Alignment - 2021 Q2 - Code - Creating code [1%]
- Meta or Community Building related - 2020 Q3 - Post (EA/LW/AF) - The Best Educational Institution in the World [1%] (11)
- Meta or Community Building related - 2020 Q3 - Post (EA/LW/AF) - The Case for Education [1%] (16)
- Meta or Community Building related - 2020 Q3 - Post (EA/LW/AF) - Blogs I’ve Been Reading [1%]
- Meta or Community Building related - 2020 Q3 - Post (EA/LW/AF) - Books I’ve Been Reading [1%]
- Meta or Community Building related - 2020 Q3 - Post (EA/LW/AF) - List of Peter Thiel’s Online Writings [5%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - Students are Employees [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - My Quarantine Reading List [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - The End of Education [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - How Steven Pinker Could Become Really Rich [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - The Public Good of Education [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - Trillion Dollar Bills on the Sidewalk [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - On Schools and Churches [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - Questions to Guide Life and Learning [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - Online Standardized Tests Are a Bad Idea [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - Harari on Religions and Ideologies [1%]
- Meta or Community Building related - 2020 Q2 - Other writing - Blog - The Single Best Policy to Combat Climate Change [1%]
- Meta or Community Building related - 2020 Q2 - Post (EA/LW/AF) - Patrick Collison on Effective Altruism [1%] (74)
- Meta or Community Building related - 2020 Q1 - Other writing - Blog - What I Learned Dropping Out of High School [1%]
- AI Alignment - 2019 Q4 - Code - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - 2019 Q4 - Code - NLP implementations [5%]
- AI Alignment - 2019 Q4 - Code - Code for lightweight Python deep learning library [5%]
- Meta or Community Building related - 2019 Q4 - Code - Lottery Ticket Hypothesis [5%]
Saulius Šimčikas
- Animal Welfare related - 2019 Q1 - Post (EA/LW/AF) - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (112)
- Animal Welfare related - 2019 Q1 - Post (EA/LW/AF) - Rodents farmed for pet snake food [99%] (73)
Severin Seehrich
- Meta or Community Building - 2022 Q1 - Event - Organisation - Organised Summer Solstice Celebration event for EAs
- Meta or Community Building - 2022 Q1 - Post (EA/LW/AF) - The Berlin Hub: Longtermist co-living space (plan) (78)
- Meta or Community Building - 2022 Q1 - Project - Designed The Berlin Longtermist Hub
Simmo Simpson
- Meta or Community Building related - 2022 Q1 - Placement - Job - Job placement as Executive Assistant to COO of Alvea.
- Meta or Community Building related - 2021 Q4 - Course - Became an Association for Coaching accredited coach
Theo Knopfer
- X-Risks - 2022 Q2 - Event - Organisation - A grant from CEA to organise retreats on x-risk in France
- X-Risks - 2022 Q2 - Course - Accepted into CHERI Summer Fellowship
- AI Alignment - 2022 Q1 - Course - AGISF
- X-Risks - 2022 Q1 - Course - https://www.trainingforgood.com/policy-careers-europe
- X-Risks - 2022 Q1 - Course - Weapons of Mass Destruction by IEGA
Toon Alfrink
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - EA is vetting-constrained [10%] (125)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - Task Y: representing EA in your field [90%] (11)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - What makes a good culture? [90%] (29)
- Meta or Community Building related - 2019 Q1 - Post (EA/LW/AF) - The Home Base of EA [90%]
- Meta or Community Building related - 2018 Q4 - Post (EA/LW/AF) - The housekeeper [10%] (23)
- Meta or Community Building related - 2018 Q4 - Post (EA/LW/AF) - We can all be high status [10%] (56)
Vinay Hiremath
- AI Alignment - 2022 Q2 - Placement - Job - Job placement as Teaching Assistant for MLAB (Machine Learning Alignment Bootcamp)
- AI Alignment - 2022 Q1 - Course - AGISF
- Meta or Community Building related - 2022 Q1 - Project - Built website for SERI conference
Key:
- Cause Area - Quarter - Name of Grantee - Title with link[C%*] (K**)
*C% = percentage counterfactual likelihood of happening without CEEALAR.
**K = Karma on EA Forum, Less Wrong, (Less Wrong; Alignment Forum).
Code
- AI Alignment - 2021 Q4 - Jaeson Booker - AI Policy Simulator
- AI Alignment - 2021 Q2 - Samuel Knoche - Creating code [1%]
- AI Alignment - 2019 Q4 - Samuel Knoche - Code for Style Transfer, Deep Dream and Pix2Pix implementation [5%]
- AI Alignment - 2019 Q4 - Samuel Knoche - NLP implementations [5%]
- AI Alignment - 2019 Q4 - Samuel Knoche - Code for lightweight Python deep learning library [5%]
- Meta or Community Building related - 2019 Q4 - Samuel Knoche - Lottery Ticket Hypothesis [5%]
- Meta or Community Building related - 2019 Q1 - Matt Goldenberg - The entirety of Project Metis [5%]
Course
- AI Alignment - 2022 Q2 - Aaron Maiwald - Courses on mathematics as part of his bachelor's degree in Cognitive Science
- AI Alignment - 2022 Q2 - Aaron Maiwald - Courses on ML as part of his bachelor's degree in Cognitive Science
- Meta or Community Building - 2022 Q2 - Luminita Bogatean - Completed the first of three courses in the UX Programme by CareerFoundry
- X-Risks - 2022 Q2 - Theo Knopfer - Accepted into CHERI Summer Fellowship
- AI Alignment - 2022 Q1 - David King - AGISF
- AI Alignment - 2022 Q1 - Jaeson Booker - AGISF
- AI Alignment - 2022 Q1 - Michele Campolo - Attended online course on moral psychology by the University of Warwick
- AI Alignment - 2022 Q1 - Theo Knopfer - AGISF
- AI Alignment - 2022 Q1 - Vinay Hiremath - AGISF
- X-Risks - 2022 Q1 - Theo Knopfer - https://www.trainingforgood.com/policy-careers-europe
- X-Risks - 2022 Q1 - Theo Knopfer - Weapons of Mass Destruction by IEGA
- Meta or Community Building related - 2021 Q4 - Simmo Simpson - Became an Association for Coaching accredited coach
- Global Health and Development related - 2021 Q3 - Nick Stares - CORE The Economy
- Meta or Community Building related - 2021 Q2 - Denisa Pop - Introduction into Group Facilitation
- AI Alignment - 2020 Q3 - Luminita Bogatean - Enrolled in the Open University’s bachelor’s degree in Computing & IT and Design [20%]
- Global Health and Development related - 2020 Q3 - Kris Gulati - Audited M208 (Pure Maths) Linear Algebra and Real Analysis, The Open University
- Global Health and Development related - 2020 Q1 - Kris Gulati - Distinctions in M140 (Statistics), The Open University
- Global Health and Development related - 2020 Q1 - Kris Gulati - Distinctions in MST125 (Mathematics), The Open University
- AI Alignment - 2019 Q4 - Luminita Bogatean - Course: Python Programming: A Concise Introduction [20%]
- Global Health and Development related - 2019 Q4 - Kris Gulati - Completed MA100 (Mathematical Methods)[auditing module], London School of Economics
- Global Health and Development related - 2019 Q4 - Kris Gulati - Completed GV100 (Intro to Political Theory)[auditing module], London School of Economics
- Global Health and Development related - 2019 Q4 - Kris Gulati - Completed ‘Justice’ (Harvard MOOC; Verified Certificate)
- Global Health and Development related - 2019 Q3 - Kris Gulati - Distinction in MU123 (Mathematics), The Open University
- Global Health and Development related - 2019 Q3 - Kris Gulati - Distinctions in MST124 (Mathematics), The Open University
- AI Alignment - 2019 Q1 - Anonymous 1 - MITx Probability
- AI Alignment - 2019 Q1 - Anonymous 1 - Model Thinking
- AI Alignment - 2019 Q1 - Anonymous 1 - Probabilistic Graphical Models
- AI Alignment - 2019 Q1 - John Maxwell - MITx Probability
- AI Alignment - 2019 Q1 - John Maxwell - Improving Your Statistical Inferences (21 hours)
- AI Alignment - 2019 Q1 - John Maxwell - Statistical Learning
- AI Alignment - 2018 Q4 - John Maxwell - ARIMA Modeling with R
- AI Alignment - 2018 Q4 - John Maxwell - Introduction to Recommender Systems (20-48 hours)
- AI Alignment - 2018 Q4 - John Maxwell - Formal Software Verification
- AI Alignment - 2018 Q4 - John Maxwell - Text Mining and Analytics
- AI Alignment - 2018 Q3 - John Maxwell - Introduction to Time Series Analysis
- AI Alignment - 2018 Q3 - John Maxwell - Regression Models
Course/event
- AI Alignment - 2021 Q2 - Quinn Dougherty - AI Safety Camp
Course Module produced
- AI Alignment - 2019 Q1 - RAISE - Nearly the entirety of this online course was created by grantees
- AI Alignment - 2019 Q1 - RAISE - Nearly the entirety of this online course was created by grantees
- AI Alignment - 2019 Q1 - RAISE - Nearly the entirety of this online course was created by grantees
Event
- AI Alignment - 2021 Q3 - Michele Campolo - Attended Good AI Badger Seminar: Beyond Life-long Learning via Modular Meta-Learning
Event - AI Safety
- AI Alignment - 2019 Q4 - - AI Safety Learning By Doing Workshop (October 2019)
- AI Alignment - 2019 Q3 - - AI Safety Learning By Doing Workshop (August 2019)
- AI Alignment - 2019 Q3 - - AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
Event - EA Retreat
- Meta or Community Building related - 2019 Q1 - - EA Glasgow (March 2019)
- Meta or Community Building related - 2018 Q3 - - EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
Event - Organisation
- X-Risks - 2022 Q2 - Theo Knopfer - A grant from CEA to organise retreats on x-risk in France
- Meta or Community Building - 2022 Q1 - Severin Seehrich - Organised Summer Solstice Celebration event for EAs
- Meta or Community Building related - 2021 Q2 - Quinn Dougherty - Organised an event for EA Philadelphia, inviting anti-aging expert Jack Harley
- AI Alignment - 2019 Q4 - Linda Linsefors - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- X-Risks - 2019 Q4 - David Kristoffersson - Organised AI Strategy and X-Risk Unconference (AIXSU) [1%] (5)
- AI Alignment - 2019 Q3 - Linda Linsefors - Organized the AI Safety Technical Unconference (August 2019) (retrospective written by a participant) (36)
- AI Alignment - 2019 Q3 - Linda Linsefors - Organized the AI Safety Learning By Doing Workshop (August and October 2019)
- Meta or Community Building related - 2019 Q2 - Matt Goldenberg - Organizer and instructor for the Athena Rationality Workshop (June 2019)
- Meta or Community Building related - 2019 Q1 - Denisa Pop - Helped organise the EA Values-to-Actions Retreat [33%]
- Meta or Community Building related - 2019 Q1 - Denisa Pop - Helped organise the EA Community Health Unconference [33%]
Event - Rationality Workshop
- Meta or Community Building related - 2019 Q2 - - Athena Rationality Workshop (June 2019) (retrospective) (24)
Event - X-risk
- AI Alignment - 2019 Q4 - - AI Strategy and X-Risk Unconference (AIXSU)
Idea
- Animal Welfare related - 2019 Q3 - Magnus Vinding - “I got the idea to write the book I’m currently writing (“Suffering-Focused Ethics”)”. [50%]
Other writing - Blog
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - Students are Employees [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - My Quarantine Reading List [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - The End of Education [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - How Steven Pinker Could Become Really Rich [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - The Public Good of Education [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - Trillion Dollar Bills on the Sidewalk [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - On Schools and Churches [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - Questions to Guide Life and Learning [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - Online Standardized Tests Are a Bad Idea [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - Harari on Religions and Ideologies [1%]
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - The Single Best Policy to Combat Climate Change [1%]
- Meta or Community Building related - 2020 Q1 - Samuel Knoche - What I Learned Dropping Out of High School [1%]
Other writing - Book Chapter
- Global Health and Development related - 2019 Q1 - Derek Foster - Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness & Wellbeing Policy Report [99%]
Other writing - comment
- Animal Welfare related - 2019 Q3 - Nix Goldowsky-Dill - EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)
Other writing - Essay
- Animal Welfare related - 2019 Q3 - Rhys Southan - Published an essay, Re-Orientation, about some of the possible personal and societal implications of sexual orientation conversion drugs that actually work [90%]
- Animal Welfare related - 2018 Q3 - Magnus Vinding - Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%]
Other writing - Project
- Global Health and Development related - 2020 Q2 - Derek Foster - Some of the content of https://1daysooner.org/ [70%]
- Global Health and Development related - 2020 Q2 - Derek Foster - Various unpublished documents for the Happier Lives Institute [80%]
- X-Risks - 2020 Q2 - Aron Mill - Helped launch the Food Systems Handbook (announcement) (30)
- Global Health and Development related - 2020 Q1 - Derek Foster - A confidential evaluation of an anxiety app [90%]
Other writing - Report
- Global Health and Development related - 2020 Q2 - Derek Foster - Some confidential COVID-19-related policy reports [70%]
Other writing - Talk
- Animal Welfare related - 2020 Q1 - Rhys Southan - Accepted to give a talk and present a poster at the academic session of EAG 2020 in San Francisco
- Meta or Community Building related - 2019 Q2 - Denisa Pop - Researched and developed presentations and workshops on Rational Compassion: see How we might save the world by becoming super-dogs [0%]
- X-Risks - 2019 Q1 - David Kristoffersson - Designed Convergence presentation (slides, notes) and held it at the Future of Humanity Institute [80%]
Paper (Preprint)
- Global Health and Development related - 2020 Q3 - Derek Foster - Option-based guarantees to accelerate urgent, high risk vaccines: A new market-shaping approach [Preprint] [70%]
- Global Health and Development related - 2020 Q2 - Derek Foster - Modelling the Health and Economic Impacts of Population-Wide Testing, Contact Tracing and Isolation (PTTI) Strategies for COVID-19 in the UK [Preprint]
Paper (Published)
- Global Health and Development related - 2020 Q3 - Derek Foster - Evaluating use cases for human challenge trials in accelerating SARS-CoV-2 vaccine development. Clinical Infectious Diseases. [70%]
- Global Health and Development related - 2019 Q4 - Anders Huitfeldt - Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology.
- X-Risks - 2019 Q4 - Markus Salmela - Joined the design team for the upcoming AI Strategy role-playing game Intelligence Rising and organised a series of events for testing the game [15%]
- AI Alignment - 2019 Q3 - Davide Zagami - Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop in IJCAI 2019 [15%]
- X-Risks - 2019 Q1 - Markus Salmela - Coauthored the paper Long-Term Trajectories of Human Civilization [99%]
Paper (Revising)
- Animal Welfare related - 2019 Q3 - Magnus Vinding - Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel [CEEALAR].”)
Paper (Submitted)
- Animal Welfare related - 2019 Q4 - Rhys Southan - Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. [10%]
Placement - Internship
- AI Alignment - 2022 Q2 - Lucas Teixeira - Internship at Conjecture
- AI Alignment - 2022 Q1 - Peter Barnett - Accepted for an internship at CHAI
- AI Alignment - 2022 Q1 - Peter Barnett - Accepted into SERI ML Alignment Theory Scholars program
- AI Alignment - 2021 Q2 - Quinn Dougherty - Obtained internship at SERI
- Meta or Community Building related - 2020 Q3 - Denisa Pop - Incubatee and graduate of Charity Entrepreneurship 2020 Incubation Program [50%]
- Meta or Community Building related - 2020 Q2 - Denisa Pop - Interned as a mental health research analyst at Charity Entrepreneurship [50%]
- Animal Welfare related - 2018 Q4 - Frederik Bechtold - Received an (unpaid) internship at Animal Ethics [1%]
Placement - Job
- AI Alignment - 2022 Q2 - Samuel Knoche - Becoming a Contractor for OpenAI
- AI Alignment - 2022 Q2 - Vinay Hiremath - Job placement as Teaching Assistant for MLAB (Machine Learning Alignment Bootcamp)
- Meta or Community Building - 2022 Q2 - Laura C - Becoming Operations Lead for The Berlin Longtermist Hub [50%]
- Meta or Community Building related - 2022 Q1 - Simmo Simpson - Job placement as Executive Assistant to COO of Alvea.
- Meta or Community Building related - 2021 Q2 - Anonymous - Obtained a job at the EA org Momentum
- Meta or Community Building related - 2020 Q2 - Luminita Bogatean - Becoming Operations Manager at CEEALAR
- AI Alignment - 2019 Q4 - Rafe Kennedy - A job at the Machine Intelligence Research Institute (MIRI) (following a 3 month work trial)
- Animal Welfare related - 2019 Q4 - Rhys Southan - “I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel [CEEALAR].” [20%]
- Animal Welfare related - 2019 Q3 - Rhys Southan - Edited and partially rewrote a book on meat, treatment of farmed animals, and alternatives to factory farming (as a paid job [can’t yet name the book or its author due to non-disclosure agreement]) [70%]
- Global Health and Development related - 2018 Q4 - Derek Foster - Hired as a research analyst for Rethink Priorities [95%]
- Animal Welfare related - 2018 Q3 - Max Carpendale - Got a research position (part-time) at Animal Ethics [25%]
Placement - PhD
- Animal Welfare related - 2020 Q3 - Rhys Southan - Applied to a number of PhD programs in Philosophy and took up a place at Oxford University (Researching “Personal Identity, Value, and Ethics for Animals and AIs”) [40%]
- Global Health and Development related - 2020 Q2 - Kris Gulati - Applied to a number of PhD programmes in Economics, and took up a place at Glasgow University
Podcast
- Meta or Community Building related - 2021 Q4 - Aaron Maiwald - 70% of the work for the Gutes Einfach Tun Podcast Episodes 3, 4, 5 and 6
- Meta or Community Building related - 2021 Q3 - Aaron Maiwald - Gutes Einfach Tun Podcast launch & Episode 1
- Meta or Community Building related - 2021 Q3 - Aaron Maiwald - Gutes Einfach Tun Podcast launch & Episode 2
- AI Alignment - 2021 Q2 - Quinn Dougherty - Multi-agent Reinforcement Learning in Sequential Social Dilemmas
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast Episode 2
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast Episode 1
- AI Alignment - 2021 Q1 - Quinn Dougherty - Technical AI Safety Podcast Episode 3
Post (EA/LW/AF)
- AI Alignment - 2022 Q2 - Michele Campolo - Some alternative AI safety research projects (8; 2)
- AI Alignment - 2022 Q1 - Peter Barnett - Thoughts on Dangerous Learned Optimization (4)
- AI Alignment - 2022 Q1 - Peter Barnett - Alignment Problems All the Way Down (25)
- Meta or Community Building - 2022 Q1 - Severin Seehrich - The Berlin Hub: Longtermist co-living space (plan) (78)
- AI Alignment - 2021 Q4 - Charlie Steiner - Reducing Goodhart sequence (115; 71)
- AI Alignment - 2021 Q4 - Michele Campolo - From language to ethics by automated reasoning (2; 2)
- AI Alignment - 2021 Q2 - Michele Campolo - Naturalism and AI alignment (10; 5)
- AI Alignment - 2021 Q2 - Quinn Dougherty - High Impact Careers in Formal Verification: Artificial Intelligence (25)
- Meta or Community Building related - 2021 Q2 - Quinn Dougherty - What am I fighting for? (8)
- Meta or Community Building related - 2021 Q2 - Quinn Dougherty - Cliffnotes to Craft of Research parts I, II, and III
- X-Risks - 2021 Q2 - Quinn Dougherty - Transcript: EA Philly's Infodemics Event Part 2: Aviv Ovadya (8)
- AI Alignment - 2021 Q1 - Michele Campolo - Contribution to Literature Review on Goal-Directedness [20%] (58; 30)
- AI Alignment - 2020 Q3 - Michele Campolo - Decision Theory is multifaceted [25%] (6; 4)
- AI Alignment - 2020 Q3 - Michele Campolo - Goals and short descriptions [25%] (14; 7)
- AI Alignment - 2020 Q3 - Michele Campolo - Postponing research can sometimes be the optimal decision [25%] (28)
- Meta or Community Building related - 2020 Q3 - Samuel Knoche - The Best Educational Institution in the World [1%] (11)
- Meta or Community Building related - 2020 Q3 - Samuel Knoche - The Case for Education [1%] (16)
- Meta or Community Building related - 2020 Q3 - Samuel Knoche - Blogs I’ve Been Reading [1%]
- Meta or Community Building related - 2020 Q3 - Samuel Knoche - Books I’ve Been Reading [1%]
- Meta or Community Building related - 2020 Q3 - Samuel Knoche - List of Peter Thiel’s Online Writings [5%]
- X-Risks - 2020 Q3 - David Kristoffersson - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q3 - Justin Shovelain - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- AI Alignment - 2020 Q2 - Michele Campolo - Contributions to the sequence Thoughts on Goal-Directedness [25%] (58; 30)
- Global Health and Development related - 2020 Q2 - Derek Foster - Pueyo: How to Do Testing and Contact Tracing [Summary] [70%] (7)
- Global Health and Development related - 2020 Q2 - Derek Foster - Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? [70%] (38)
- Meta or Community Building related - 2020 Q2 - Samuel Knoche - Patrick Collison on Effective Altruism [1%] (74)
- X-Risks - 2020 Q2 - Aron Mill - Food Crisis - Cascading Events from COVID-19 & Locusts (97)
- X-Risks - 2020 Q2 - David Kristoffersson - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q2 - Justin Shovelain - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- AI Alignment - 2020 Q1 - Michele Campolo - Wireheading and discontinuity [25%] (21; 11)
- X-Risks - 2020 Q1 - David Kristoffersson - State Space of X-Risk Trajectories [95%] (24)
- X-Risks - 2020 Q1 - David Kristoffersson - Collaborated on 14 other Convergence publications [97%] (10)
- X-Risks - 2020 Q1 - David Kristoffersson - The ‘far future’ is not just the far future [99%] (29)
- X-Risks - 2020 Q1 - Justin Shovelain - Published or provided the primary ideas for and directed the publishing of 19 EA/LW forum posts (see our publications document for more detail) [80%] (10)
- X-Risks - 2020 Q1 - Michael Aird - Four components of strategy research [50%] (21)
- X-Risks - 2020 Q1 - Michael Aird - Using vector fields to visualise preferences and make them consistent [90%] (39; 7)
- X-Risks - 2020 Q1 - Michael Aird - Value uncertainty [95%] (16)
- AI Alignment - 2019 Q4 - Anonymous 2 - Some Comments on “Goodhart Taxonomy” [1%] (9; 4)
- AI Alignment - 2019 Q4 - Anonymous 2 - What are we assuming about utility functions? [1%] (17; 9)
- AI Alignment - 2019 Q4 - Anonymous 2 - Critiquing “What failure looks like” (featured in MIRI’s Jan 2020 Newsletter) [1%] (35; 17)
- AI Alignment - 2019 Q4 - Anonymous 2 - 8 AIS ideas [1%]
- AI Alignment - 2019 Q4 - Michele Campolo - Thinking of tool AIs [0%] (6)
- Global Health and Development related - 2019 Q4 - Anders Huitfeldt - Post on EA Forum: Effect heterogeneity and external validity (6)
- Global Health and Development related - 2019 Q4 - Anders Huitfeldt - Post on LessWrong: Effect heterogeneity and external validity in medicine (49)
- AI Alignment - 2019 Q3 - Anonymous 2 - Cognitive Dissonance and Veg*nism (7)
- AI Alignment - 2019 Q3 - Anonymous 2 - Non-anthropically, what makes us think human-level intelligence is possible? (9)
- AI Alignment - 2019 Q3 - Anonymous 2 - What are concrete examples of potential “lock-in” in AI research? (17; 11)
- AI Alignment - 2019 Q3 - Anonymous 2 - Distance Functions are Hard (31; 11)
- AI Alignment - 2019 Q3 - Anonymous 2 - The Moral Circle is not a Circle (36)
- Animal Welfare related - 2019 Q3 - Max Carpendale - Interview with Michael Tye about invertebrate consciousness [50%] (32)
- Animal Welfare related - 2019 Q3 - Max Carpendale - My recommendations for gratitude exercises [50%] (39)
- Global Health and Development related - 2019 Q3 - Derek Foster - Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program [95%] (54)
- Animal Welfare related - 2019 Q2 - Max Carpendale - My recommendations for RSI treatment [25%] (69)
- Animal Welfare related - 2019 Q2 - Max Carpendale - Thoughts on the welfare of farmed insects [50%] (33)
- Animal Welfare related - 2019 Q2 - Max Carpendale - Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
- Animal Welfare related - 2019 Q2 - Max Carpendale - Interview with Jon Mallatt about invertebrate consciousness (winner of 1st place EA Forum Prize for Apr 2019) [50%] (82)
- AI Alignment - 2019 Q1 - Chris Leong - Deconfusing Logical Counterfactuals [75%] (25; 6)
- AI Alignment - 2019 Q1 - Chris Leong - Debate AI and the Decision to Release an AI [90%] (9)
- AI Alignment - 2019 Q1 - Davide Zagami - RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (20)
- AI Alignment - 2019 Q1 - Davide Zagami - RAISE lessons on Fundamentals of Formalization [5%] (28)
- AI Alignment - 2019 Q1 - Linda Linsefors - The Game Theory of Blackmail (“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.“) (24; 6)
- AI Alignment - 2019 Q1 - Linda Linsefors - Optimization Regularization through Time Penalty (“This post resulted from conversations at the EA Hotel [CEEALAR] and would therefore not have happened without the hotel.”) [0%] (11; 7)
- Animal Welfare related - 2019 Q1 - Max Carpendale - Sharks probably do feel pain: a reply to Michael Tye and others [50%] (21)
- Animal Welfare related - 2019 Q1 - Max Carpendale - The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (29)
- Animal Welfare related - 2019 Q1 - Saulius Šimčikas - Will companies meet their animal welfare commitments? (winner of 3rd place EA Forum Prize for Feb 2019) [96%] (112)
- Animal Welfare related - 2019 Q1 - Saulius Šimčikas - Rodents farmed for pet snake food [99%] (73)
- Meta or Community Building related - 2019 Q1 - Matt Goldenberg - What Vibing Feels Like [5%] (19)
- Meta or Community Building related - 2019 Q1 - Matt Goldenberg - A Framework for Internal Debugging [5%] (41)
- Meta or Community Building related - 2019 Q1 - Matt Goldenberg - How to Understand and Mitigate Risk [5%] (55)
- Meta or Community Building related - 2019 Q1 - Matt Goldenberg - S-Curves for Trend Forecasting [5%] (99)
- Meta or Community Building related - 2019 Q1 - Matt Goldenberg - The 3 Books Technique for Learning a New Skill [5%] (175)
- Meta or Community Building related - 2019 Q1 - Toon Alfrink - EA is vetting-constrained [10%] (125)
- Meta or Community Building related - 2019 Q1 - Toon Alfrink - Task Y: representing EA in your field [90%] (11)
- Meta or Community Building related - 2019 Q1 - Toon Alfrink - What makes a good culture? [90%] (29)
- Meta or Community Building related - 2019 Q1 - Toon Alfrink - The Home Base of EA [90%]
- AI Alignment - 2018 Q4 - Anonymous 1 - Believing others’ priors (8)
- AI Alignment - 2018 Q4 - Anonymous 1 - AI development incentive gradients are not uniformly terrible (21)
- AI Alignment - 2018 Q4 - Anonymous 1 - Should donor lottery winners write reports? (29)
- AI Alignment - 2018 Q4 - Chris Leong - On Abstract Systems [50%] (14)
- AI Alignment - 2018 Q4 - Chris Leong - Summary: Surreal Decisions [50%] (24)
- AI Alignment - 2018 Q4 - Chris Leong - On Disingenuity [50%] (28)
- AI Alignment - 2018 Q4 - Chris Leong - An Extensive Categorisation of Infinite Paradoxes [80%] (-9)
- Animal Welfare related - 2018 Q4 - Max Carpendale - Why I’m focusing on invertebrate sentience [75%] (53)
- Meta or Community Building related - 2018 Q4 - Toon Alfrink - The housekeeper [10%] (23)
- Meta or Community Building related - 2018 Q4 - Toon Alfrink - We can all be high status [10%] (56)
- AI Alignment - 2018 Q3 - Anonymous 1 - Annihilating aliens & Rare Earth suggest early filter (8)
Project
- Meta or Community Building related - 2022 Q1 - Severin Seehrich - Designed The Berlin Longtermist Hub
- Meta or Community Building related - 2022 Q1 - Denisa Pop - $9,000 grant from the Effective Altruism Infrastructure Fund to organize group activities for improving self-awareness and communication in EAs [50%]
- Meta or Community Building related - 2022 Q1 - Vinay Hiremath - Built website for SERI conference
- Meta or Community Building related - 2021 Q2 - Jack Harley - Created Longevity wiki and expanded the team to 8 members
- Global Health and Development related - 2020 Q2 - Derek Foster - Parts of the Survey of COVID-19 Responses to Understand Behaviour (SCRUB)
- X-Risks - 2019 Q3 - David Kristoffersson - Applied for 501(c)(3) non-profit status for Convergence [non-profit status approved in 2019] [95%]
- X-Risks - 2019 Q3 - Justin Shovelain - Got non-profit status for Convergence Analysis and established it legally [90%]
- Meta or Community Building related - 2019 Q2 - Denisa Pop - Becoming Interim Community & Projects Manager at CEEALAR and offering residents counseling/coaching sessions (productivity & mental health) [0%]
- X-Risks - 2019 Q1 - David Kristoffersson - Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
- X-Risks - 2019 Q1 - David Kristoffersson - Built new website for Convergence [90%]
- X-Risks - 2018 Q4 - David Kristoffersson - Incorporated Convergence [95%]
Statement
- Meta or Community Building related - 2020 Q4 - CEEALAR - We had little in the way of concrete outputs this quarter due to diminished numbers (pandemic lockdown)
- Global Health and Development related - 2020 Q3 - Kris Gulati - “All together I spent approximately 9-10 months in total at the Hotel [CEEALAR] (I had appendicitis and had a few breaks during my stay). The time at the Hotel was incredibly valuable to me. I completed the first year of a Maths degree via The Open University (with Distinction). On top of this, I self-studied Maths and Statistics (a mixture of Open University and MIT OpenCourseWare resources), covering pre-calculus, single-variable calculus, multivariable calculus, linear algebra, real analysis, probability theory, and statistical theory/applied statistics. This provided me with the mathematics/statistics knowledge to complete the coursework components at top-tier Economics PhD programmes. The Hotel [CEEALAR] also gave me the time to apply for PhD programmes. Sadly, I didn’t succeed in obtaining scholarships for my target school – The London School of Economics. However, I did receive a fully funded offer to study a two-year MRes in Economics at The University of Glasgow. Conditional upon doing well at Glasgow, the two-year MRes enables me to apply to top-tier PhD programmes afterwards. During my stay, I worked on some academic research (my MSc thesis, and an old anthropology paper), which will help my later PhD applications. I applied for a variety of large grants at OpenPhil and other EA organisations (which weren’t successful). I also applied to a fellowship at Wasteland Research (I reached the final round), which I couldn’t follow up on due to other work commitments (although I hope to apply in the future). Finally, I developed a few research ideas while at the Hotel. I’m now working on obtaining socio-economic data on academic Economists. I’m also planning on running/hosting an experiment that tries to find the most convincing argument for long-termism. These ideas were conceived at the Hotel and I received a lot of feedback/help from current and previous residents.
Counterfactually – if I wasn’t at the Hotel [CEEALAR] – I would have probably only been able to complete half of the Maths/Stats I learned. I probably wouldn’t have applied to any of the scholarships/grants/fellowships because I heard about them via residents at the Hotel. I also probably wouldn’t have had time to focus on completing my older research papers. Similarly, discussions with other residents spurred the new research ideas I’m working on.” [0%]
- AI Alignment - 2019 Q3 - Linda Linsefors - “I think the biggest impact EA Hotel did for me, was about self growth. I got a lot of help to improve, but also the time and freedom to explore. I tried some projects that did not lead anywhere, like Code to Give. But getting to explore was necessary for me to figure out what to do. I finally landed on organising, which I’m still doing. AI Safety Support probably would not have existed without the hotel.” [0%]
Next steps for departing grantees
- Samuel Knoche starts work as a contractor for OpenAI.
- Lucas Teixeira takes up an internship at Conjecture.
- Peter Barnett takes up an internship at the Stanford Existential Risks Initiative.
- Aron Mill enrolls in a Master's program at TU Berlin.
- Quinn Dougherty takes up an internship at the Stanford Existential Risks Initiative.
- Kris Gulati takes up an Economics MRes/PhD at the University of Glasgow.
- Rhys Southan embarks upon a PhD in Philosophy at the University of Oxford.
- Linda Linsefors starts AI Safety Support.
- Davide Zagami goes on to work at AI/computer vision startup HoraVision.
- Chris Leong goes on to work at Triplebyte.
- Ed Wise goes on to complete a Master's in International Relations (cum laude) at Leiden University.
- Rafe Kennedy starts a job at the Machine Intelligence Research Institute (MIRI).
- Hoagy Cunningham takes part in the MIRI Summer Fellows program, and goes on to pursue AI policy opportunities as a civil servant in the UK government.