Greg Colbourn has been into EA for years but moved into working on related things full time – including founding and organising this project – relatively recently. Previously, he studied Astrophysics (undergrad) and Earth System Modelling (PhD), and worked on 3D printing/open source hardware (a business with a view to EA). He has dabbled in investing (mainly crypto) and studied subjects related to AI Safety, which he hopes to do more of.
Florent Berthet is a social entrepreneur who cofounded Effective Altruism France. He also founded and is the director of a Sudbury school in Lyon. After teaching entrepreneurship at an engineering school for 3 years, Florent now works full-time at Good Growth, a new initiative that helps high-impact organisations be more effective by connecting them with experts from various fields.
Jonas Vollmer is a co-founder and co-executive director at the Effective Altruism Foundation, a research group and grantmaker focused on preventing s-risks from AI. He holds degrees in medicine and economics with a focus on public choice, health economics, and development economics. He previously served on the boards of several charities, is an advisor to the EA Long-Term Future Fund, and played a key part in establishing the effective altruism community in continental Europe.
Rhys Southan (Guest Representative) — a writer and philosopher with a focus on animal ethics and population ethics. He completed a master’s degree in philosophy at The University of Oxford in 2017. He has been published in the New York Times, Aeon Magazine and Modern Farmer. While here, Rhys is researching and writing on animal ethics. He plans to start a philosophy PhD in 2020.
Greg Colbourn (Executive Director) — See above. Greg is unsalaried in this role.
Denisa Pop (Interim Community Manager) — a former counseling psychologist specialized in cognitive-behavioral therapy, with a research background in positive human-animal interaction (PhD). Now she uses her previous knowledge to improve human-human interaction, conducting research in rational compassion and working as Interim Community & Projects Manager. Previously she volunteered as Guest Representative. As a hobby, she enjoys bringing people together through organizing conferences and retreats (such as EAGx Netherlands, the Values-to-Actions Retreat, and the EA Community Health Unconference).
Jacob Coventry-Peters (Operations Manager) — studied languages, then government and politics, at Newcastle University. At only 22 years of age, he brings invaluable experience from working as a floor manager at several high-end salons in London. He hopes to bring a level of professional service to the project that enables grantees to have as much time as possible to devote to their personal projects. A polyglot, he enjoys language learning; he also has ambitions of working at the very high end of customer service.
Markus Salmela — studies human health, philosophy and social sciences. He has worked on research projects relating to existential risks and long-term forecasting, and has also organised EA events. He is currently writing about longevity research from an existential risk perspective.
Luminita-Hadasa Bogatean — studies programming and is currently contributing through coding in an EA project, as well as helping with collecting the updates and outputs from the guests and displaying them on the website. During her stay, she hopes to find a balance between her hobbies by offering free haircuts, experimenting with cooking, continuing her self-development and building up her coding skills.
Samuel Knoche is a writer and an autodidact. He studied Computer Science for two years before becoming disillusioned with the higher education system and dropping out. He explains his decision in “The Case For Dropping out of College”, published in Quillette. He is now studying to ultimately contribute to ML and AI safety research. He is also exploring ideas around EA strategy and philosophy.
David Kristoffersson — “I’m co-founder of Convergence and an existential risk strategy researcher. I have a background as an R&D Project Manager and Software Engineer at Ericsson and have worked with FHI. My plan is to define a strategic perspective, research agenda, and research program on the overall question of how to ensure a beneficial future. The intention is to point at structures in research for ensuring a good long-term future, to illuminate important gaps, and to give better language for talking and thinking about these questions. This is a complex undertaking and I only expect an early version of this research structuring to result from it.”
Justin Shovelain — the founder of the quantitative long-term strategy organisation Convergence. Over the last seven years he has worked with MIRI, CFAR, EA Global, Founders Fund, and Leverage, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. He has an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
Kris Gulati completed a Politics degree at The University of London: Goldsmiths College. After completing a summer course in Social and Cultural Anthropology at The University of Oxford, he completed a two-year MSc in Econometrics and Economics (with Distinction) at The University of Nottingham. He has worked as a researcher in various capacities in academia at The London School of Economics, UCL, and The University of Nottingham. Outside of academia he has worked or written for various non-profit organisations, including Development in Action, the UNDP Knowledge and Innovation Team, and The MicroLoan Foundation. He is currently studying towards a degree in Mathematics at the Open University (part-time) and will be beginning his PhD in Economics in 2020, working on growth, macro-development, and political economy.
Derek Foster — has a background in philosophy, education, public health and health economics. While living here, he co-authored a chapter of the 2019 Global Happiness Policy Report, which focused on ways of incorporating subjective well-being into healthcare prioritization. He now works on animal welfare, mental health and grant-making for Rethink Priorities.
Michele Campolo is studying agent foundations and AI safety. He previously obtained a bachelor’s degree in mathematics in Udine, Italy.
Davide Zagami — is currently studying and thinking about AI Alignment. Some of his time is allocated to acquiring better knowledge of Machine Learning. His last participation in the AI Safety Camp culminated in a workshop paper about wireheading. He previously worked at RAISE, where he developed lessons for an online course about AI Safety. He majored in Computer Engineering with a concentration in game theory and game development, and wrote his thesis on the Internet of Things.
Michael Aird is currently contracting as a researcher/writer for Convergence, which does existential risk reduction strategy research. He previously received a First Class Honours degree in Psychology, published a peer-reviewed paper, and worked as a teacher. He’s passionate about effective altruism, longtermism, and research communication.
Max Carpendale — “I’m doing research on invertebrate sentience, suffering, and related subjects. I believe the subject is impactful because invertebrates are much more numerous than vertebrates, yet they receive very little attention from animal advocates. I also think that having a better understanding of sentience (especially edge cases like invertebrates) might help with issues related to digital sentience in the future. I also write about subjects that seem important to the EA community that I feel well-placed to write about. For example, I recently wrote a guide on managing repetitive strain injuries for EAs.”
Linda Linsefors — an independent AI Safety student and researcher. She has previously completed a PhD in quantum cosmology, organised an AI Safety Camp and interned at MIRI. Linda is currently learning more ML and RL, and also thinking about wireheading and the relation between learning and theory, among other things, including organizing AI Safety events.
Fredi Backtoldt — “I’m studying philosophy at Goethe Universität Frankfurt, currently writing my master’s thesis on the Demandingness Objection to ethical theories. On the side, I started to volunteer for Animal Ethics, where I now also do an internship. The hotel with its great atmosphere helps me to put my values into action, and that’s what I’m trying to do here!”
Tom Frederik Lieberum — “I graduated with a B.Sc. in Physics from RWTH Aachen this September. My engagement in EA has been co-organizing EA Aachen since 2018 and participating in the German Effective Altruism Network (GEAN). While at the hotel, I am studying the RAISE course material to prepare for a switch to studying AI full-time, and doing remote work for GEAN. I also want to improve my life by debugging and habit building. My interests include, but are not limited to: AI, philosophy, rationality, hiking and badminton.”
Tom Cares — an entrepreneur and political activist. He is creatively plotting to facilitate and guide implementations of liquid democracy that would hinder the ability of powerful interests to shape public policy to exploit the weak. He is also working to create a new class of financial instruments to drive investment into personal potential.
Toon Alfrink — is the founder of RAISE, a startup which aimed to upgrade the pipeline for junior AI Safety researchers, primarily by creating an online course. He co-founded LessWrong Netherlands in 2016. He has given talks about EA and AI Safety, addressing crowds at various venues including festivals and fraternities. He worked part-time on managing the project, using his experience of living in a Buddhist temple as a reference for creating the best possible living and working environment.
Gavin Leech — “I’m a data scientist at a giant insurer. I write an EA blog here. I like talking about the far future, machine learning, analytic philosophy, statistics as applied epistemology, tech solutions to social problems, and social solutions to tech problems. During my first stay at Athena I worked on the Prague AI Safety Camp and wrote my first LessWrong piece, on technological unemployment. I’ll be back!”
Mathilde Guittard — “Senior Associate at Invesco (finance), I work on big real estate projects around the world. It is a bit of an earn-to-give situation that allows me to donate monthly to EA charities. I try to spread the word. I also care about social entrepreneurship and finding effective ways to improve global health (mostly through nutrition) in order to reduce poverty and human suffering, increasing equality of chance so the best people can make the world a better place. Still pondering whether my next move will be in impact investment or in global health.”
John Maxwell — “I’m a software developer and entrepreneur, and I’ve been thinking about effective altruism for nearly ten years now. I co-founded MealSquares, a nutritionally complete food bar company, and I have a degree in computer science from UC Berkeley. At the hotel, I am focused on acquiring deeper knowledge of machine learning and thinking about AI safety (this essay of mine won $2000 in the AI alignment prize).”
Evan Sandhoefner — graduated from Harvard in 2017 with a degree in economics and computer science. He worked as a program manager at Microsoft for a short time before leaving to pursue EA work directly. For now, he’s independently studying a wide range of EA-relevant topics, with a particular interest in consciousness.
Alexandr Nil — “I came to know and embrace effective altruism through the writings of David Pearce. Alas, I’m in the Hotel only for ten days – an official “vacation” from my earning-to-give software developer job in Berlin. While I’m here, I’m finishing a project proposal related to Pearce’s Abolitionist Project, working on a blog-post draft, deciding whether and how to switch to ~direct EA work (including considering the Hotel Manager role), continuing to volunteer for LEAN’s editorial team, and experiencing a better way of living an EA life in general.”
Alexandra Johnson — “I’m a current graduate student and researcher with an interest in policy and operations. I’ve been involved with effective altruism for the past 3 years or so. I have a degree in engineering and I’m currently in an operations role with Convergence Analysis, focused on existential risk strategy, while also working at Lawrence Berkeley National Laboratory on health related topics. Previously, my research work has spanned health and animal welfare.”
Rafe Kennedy — works on macrostrategy & AI strategy and studies maths and statistics, with the goal of contributing towards AI Safety. Previous work as a grantee has included game-theoretic modelling of AI development and visualisations of statistical concepts. He holds a master’s in Physics & Philosophy from Oxford and has previously worked as a software engineer at a venture-backed data science startup. After his stay with us, Rafe went on to work at MIRI.
Saulius Šimčikas — a Research Analyst at Rethink Priorities, mostly working on topics related to animal welfare. Previously, he was a research intern at Animal Charity Evaluators, organized Effective Altruism events in the UK and Lithuania, and earned to give as a programmer. Living in the hotel helped him to focus on work.
Arron Branton — moved from London to Blackpool, quitting his job to focus on learning programming full time. He worked on creating a video game for the Google Play store and Apple’s App store, which was planned to be released in April 2019. He’s since moved on to join ‘The Singularity Group’, a community founded by the gaming personality ‘Athene’ to raise money for effective charities through video-game livestreaming and development.
Chris Leong — currently focusing his research on infinite ethics, but his side-interests include decision theory, anthropics and paradoxes. He helped found the EA society at the University of Sydney and managed to set up an unfortunately short-lived group at the University of Maastricht whilst on exchange. He represented Australia at the International Olympiad in Informatics and won a Gold in the Asian Pacific Maths Olympiad. He’s studied philosophy and psychology and occasionally enjoys dancing Salsa.
Rory Švarc — currently self-studying machine learning and economics, in order to develop a wider range of relevant skills, think more about cause prioritisation, and pursue further graduate studies on topics related to effective altruism. Whilst here, they have begun working on a conference paper (aiming to introduce ideas about logical induction and AI safety to mathematical philosophers), and performed remote work for the Open Philanthropy Project, helping to develop better tests for future research applicants. Previously, they were a graduate student in philosophy at the University of Cambridge, working on decision theory and formal epistemology. They have nascent interests in the potential for research in quantitative history to provide guidance in predicting the long-term future, and find the communal atmosphere of the hotel a fantastic place to discuss many different cause areas, given their current focus is on the ‘explore’ side of the explore/exploit tradeoff.
Hoagy Cunningham — graduated from Oxford in 2017 with a degree in Politics, Philosophy and Economics, and is now teaching himself all the Maths, Neuroscience and Computer Science he can get his hands on that might point the way towards a future of safe AI. He currently works for RAISE, porting Paul Christiano’s IDA sequence to their teaching platform, and adding exercises.
Roshawn Terell — an AI researcher, information theorist and cognitive scientist who works to build bridges between distant fields of knowledge. He is mostly self-taught, having worked on multiple research projects. He has had the opportunity to lecture at Oxford, presenting his ideas to postdocs and postgraduates. He is presently engaged in applying his cognitive science theories towards developing more sophisticated artificial intelligence.
Edward Wise — became interested in Effective Altruism at Oxford University, and aims to research the interaction between the ethics of effective altruism and left-wing political philosophy.
Matt Goldenberg [Guest Representative, Apr-Jun 2019] — a community builder and serial entrepreneur. His current research is on the systematization of creating impactful organizations.
Jaime Sevilla — independent researcher with a background in Mathematics and Computer Science. “I stayed for a week in the hotel, where I did some shallow research on open source game theory. This exercise was useful for me to explore the possibility of and ultimately decide against working independently on technical AI safety research.” Jaime later went on to intern at the Future of Humanity Institute and has received a grant from the Effective Altruism Foundation to support his work.
Felicity & Max Reddel — “We are currently finishing up our B.Sc.s in Artificial Intelligence and working on theses on AI governance topics. Max is inspecting the balance between short- and long-term efforts, and Felicity is working on deepfakes and trying to evaluate their societal importance.”
Nuño Sempere — “I first stayed at the EA Hotel during September 2018, learning about development policy and global poverty, and working on a randomized trial for the European Summer Program on Rationality, on which I continued working for the next year. Although the project ultimately failed, I acquired a breadth of knowledge and skills, which I value. My second stay was at the end of October 2019; I programmed an implementation of proportional approval voting for the Center for Election Science (none existed before), and attended the AI Safety Learning by Doing Workshop, which I’ve so far found valuable.”
Morgan Sinclaire — Morgan’s research focuses mostly on AI safety. Having finished their MS in math, they moved here at the end of July to focus on important research full-time. Currently, they are still forming their own models of AI safety, and hope to form a coherent research agenda. On the side, they also published one post on the Alignment Forum and two posts on the EA Forum in the month they’ve been here.
Please send us a short bio if you are staying (or have stayed with us) and would like to appear here.