My main motivation for writing this is to help you consider whether going to an Effective Altruism Global (EAG) conference this year is worth it. Having doubted the value myself, I was convinced otherwise at EAGx Oxford 2016. So I’m sharing my personal highlights from that weekend to demonstrate that these conferences are among the most valuable events *anyone* can attend: they offer great content, a unique framework and exceptional attendees.
Let’s start with a quick overview of the topics from the official sessions I attended:
- High Impact Career Planning
- Probability and Statistics
- Applied Rationality
- Research Heuristics
- Logical Fallacies
- International Development
- Founding Organisations
- Big Data
- Catastrophic and Existential Risks
- On What Matters (III)
Most of the time, two other sessions ran simultaneously, and the many high-profile speakers didn’t make choosing any easier (click here for the detailed schedule). So within our Genevan team, we made sure we covered all relevant sessions and exchanged notes afterwards. I personally missed the Artificial Intelligence sessions, so none of them will be part of this post - but if that’s what you’re interested in, take a look at this post that I randomly found while absolutely not procrastinating.
Online, the EA community can sometimes seem less heartwarming than it really is (like me). However, once you make it to an in-person event, it is hard to miss that the movement consists of many lovely humans trying their best to make sure we figure things out in time. Humans are the top highlight of any EA event. Everyone is warm (±37°C, ideally), open-minded, reasonable and curious. Conversations range from casual chatting (the often awkward “Hi, I’m curious Konrad, what’s your name? What do you do? Where do you do that? Can we become friends?” kind) to serious truth-seeking (the “you believe x, well I don’t, and here’s why - so let’s sit down and figure out why we differ” kind), though the latter only rarely grows out of the former. Everyone is super knowledgeable in the most diverse matters. Even better, you can ask anyone anything and they’ll be happy to help you out.
I used my free time to reconnect with friends, meet new people and process all the input. The general vibe is super easygoing - almost like you’re at a music festival in Portugal, except you’re not: you’re in one of the world’s academic capitals in chilly, rainy England. The speakers could often be spotted at other sessions too, blending in with the crowd and acting like mere muggles; you could even ask them mundane questions. So if you want to see what this movement is all about, be inspired and gain motivation to do something: go meet its human subsets at an EAG conference and you will have a hard time not liking it (for those of you who will still have a hard time because you care more about humanity than about its individual subsets, I wrote the next section - I can understand you, sometimes).
Top three sessions
Workshop appetisers from the Center for Applied Rationality (CFAR)
If you’re serious about ensuring our best possible future, CFAR is dedicated to turning you into the best goal-achiever ever (I tried to come up with something more self-explanatory, but ‘scout’, ‘explorer’ or ‘self-improver’ just don’t cut it. Anyway, it’s about applying logic and reason to become your best possible self and effectively save the world. Here, have a TED Talk). At the conference, they served three appetisers from their immersive four-day curriculum, each compressed into a short, one-hour session. They assumed that the crowd at the conference was advanced enough to handle the implementation independently; hence, they mainly explained each technique and the reasoning behind it.
I had heard about CFAR and their work, but I wasn’t aware of just how useful it would be. I can honestly say that I now think more deliberately about how we go about doing things, from discussing to making plans to implementing new habits. On top of that, Duncan, the coach, offered a lot of great analogies and remarks that made the sessions very enjoyable. I hope to attend their full course soon because there’s still far too much self-improvement to be done.
“That’s what you get if you’re running computers made of meat that wrote themselves.”
- Duncan, CFAR coach
To give you a more concrete idea, here are the three techniques we learned, with explanatory links:
- Building Blocks of Behaviour Change: ‘Trigger-Action-Plans’
TAPs create incremental change by iterating a basically zero-effort, three-step process that is designed to “summon your sapience” and let you rewire your brain.
- Navigating Intellectual Disagreement: ‘Double crux’
A technique to turn disagreements into a collaborative search for truth - or, at the very least, a way to learn as much as possible from other worldviews and gather more data.
- Overcoming Planning Biases: ‘Murphy-Jitsu’
Murphy-Jitsu is designed to make us think about things we actually can anticipate but usually don’t when making future plans. The obvious often is non-obvious, too.
"The universe is a dark maze and at some point, all of us run into a wall.
Because we had a belief of where to go."
Presentation: What Do People Think About Utilitarians?
This talk by Molly Crockett, on her research and the conclusions she has drawn from it, was particularly interesting because a large part of the EA community identifies as some kind of utilitarian. However, the word alone seems to divide crowds. Thus, I was eagerly hoping for a few insights on how one can avoid coming across as cold and heartless when presenting trade-offs and calculations that, even when based on global empathy, come off as inhuman(e).
Crockett’s lab found that utilitarians are generally seen as (i) less trustworthy, (ii) less empathic and (iii) less likely to cooperate. She even claims that humans have developed a default morality - deontology. That may have evolved to signal our value as cooperators on the partnership market, rooted in the implicit social contracts that most of our societal fabric relies on. Utilitarian logic poses a direct threat to this fabric. Therefore, people who claim that it’s obvious or easy to sacrifice something - even for the greater good - quickly alienate themselves from society, because the rest of the group fears being used.
It follows that, if we really want to appeal to evidence and reason in our decision-making, we ought to appeal simultaneously to our ‘why’ - our values, our altruism. Without understanding that, it is understandably off-putting to listen to statistics and cost-benefit analyses. For EA, that means we need to emphasise the ‘A’ part of the movement more proactively, especially when talking about the ‘E’. Additionally, the movement could make more of an effort to support individual autonomy and diversity, building an unshakeable basis of trust so it can thrive without being misunderstood.
Presentation: Heavy Tails & Power Laws
“Normal is not normal!” proclaimed the Future of Humanity Institute (FHI)’s Anders Sandberg. He started his fun talk by explaining why the ‘normal distribution’, or ‘bell curve’, should really only be called the ‘Gaussian distribution’: outside of well-behaved cases, like the intelligence of humans or rolling dice, the Gaussian distribution and the Central Limit Theorem can be very misleading (the CLT says that the sum - or average - of many small, independent random influences tends towards a Gaussian distribution; it breaks down when outcomes are dominated by a few large, correlated events). That is because we live in ‘Extremistan’, and figuring out the things we don’t already know requires a different mindset here.
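To make the CLT concrete, here’s a minimal sketch - my own toy example, not from the talk: averaging many fair dice rolls produces values that cluster around 3.5 in a roughly Gaussian shape, with about 95% of them within two standard deviations of the mean.

```python
import random
import statistics

random.seed(0)

# CLT in action: the average of 100 independent dice rolls,
# repeated 10,000 times, is approximately Gaussian around 3.5.
averages = [
    statistics.mean(random.randint(1, 6) for _ in range(100))
    for _ in range(10_000)
]

mu = statistics.mean(averages)
sigma = statistics.stdev(averages)

# For a Gaussian, roughly 95% of values fall within two standard deviations.
within_2sd = sum(abs(x - mu) <= 2 * sigma for x in averages) / len(averages)
print(round(mu, 2), round(within_2sd, 2))  # mean ≈ 3.5, fraction ≈ 0.95
```

This machinery is exactly what fails in Sandberg’s ‘Extremistan’: when the individual contributions are heavy-tailed or strongly correlated, the averages no longer settle into a neat bell curve.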
In Extremistan, freak events (more beautifully called ‘Black Swans’) occur. And when they occur, they are far more intense than usual events, often triggering further extreme events. This is due to the complex (inter-)dependencies and correlations in our world. A real-world distribution might therefore look like a bell curve near its centre, but its tails are nowhere close. There are a lot of cascade effects in Extremistan, with its fractal-geometrical nature, so expecting most values to lie within two or three standard deviations of the mean is a dangerous assumption we tend to make intuitively.
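How dangerous that assumption is can be sketched numerically. The following comparison is my own illustration, not from the talk: it counts “six-sigma” events under a standard Gaussian and under a heavy-tailed Pareto distribution with tail index 2.5.

```python
import math
import random

random.seed(42)
N = 100_000

# Pareto with tail index a = 2.5 (minimum value 1): mean = a/(a-1) = 5/3,
# variance = a/((a-1)^2 * (a-2)) = 20/9, so sd ≈ 1.49.
a = 2.5
p_mean = a / (a - 1)
p_sd = math.sqrt(a / ((a - 1) ** 2 * (a - 2)))

# Count "six-sigma" events: samples more than six standard deviations
# above the mean, for a standard Gaussian vs the heavy-tailed Pareto.
gauss_hits = sum(random.gauss(0, 1) > 6 for _ in range(N))
pareto_hits = sum(random.paretovariate(a) > p_mean + 6 * p_sd for _ in range(N))

print(gauss_hits)   # essentially never happens under a Gaussian
print(pareto_hits)  # hundreds of times in the same number of draws
```

Same mean-and-standard-deviation bookkeeping, wildly different tails: the Gaussian model would call each of those Pareto outliers a once-in-the-age-of-the-universe event.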
Anders’ talk illustrated how dangerous oversimplifications are, and how unaware we are of so-called ‘Dragon Kings’. Or, to say it less beautifully: how unaware we are of maxima caused by non-linear dynamics in complex systems, creating statistical outliers that tower above anything we’d seen before. Yet there is hope: studying these dynamics in detail might allow us to recognise more Black Swans as Dragon Kings - events that we could predict with more complex models. Instead of saying “oh, that was unlikely”, we ought to say “oh, the model was wrong” an awful lot more often.
This is why the EA movement is trying to figure out how to get into those heavy tails - “thinking meta matters!” Figuring out where tails actually cut off and finding the dragon kings could help us prepare for extreme events. No matter how probable such events are, with Dragon Kings you’re better safe than sorry. Anders is trying to do exactly that at the FHI; drawing nature’s lottery tickets and stacking the deck wherever possible. In addition to that, he’s a terrific speaker. Here are the slides to this talk. He also gave a presentation on Human Enhancement that I regret not having been to because these slides alone are already incredibly interesting.
Many different people talked about policy work as a potential top priority, which convinced me that the movement keeps updating in the right direction. The same goes for designing and giving presentations. At previous events, I was always a little baffled at how bad the slides were and how unprepared some speakers seemed, but this time I could say that of only one presentation. Along with being more professional, the general organisation and management were handled extremely well, and even the vegan food choices were quite nice.
Other than that, I was astonished at how much low-hanging fruit (things that need little effort to achieve, yet often have considerable payoffs) there still seems to be in fighting extreme poverty. Two talks on international development outlined how much better we could use (big) data if only it were all publicly available. How much would that alone contribute to ending poverty? $3 trillion per year in value, claims Alena Stern from AidData, who also emphasised that development aid wasn’t scientific at all before the nineties. Additionally, if programmes weren’t divided along country borders but focused only on the poorest regions, we could do a lot more for those who are worst off.
Further, at a fraction of the cost of running randomised controlled trials, we could analyse geospatial data and satellite imagery to set up quasi-experiments reaching all the way back to the eighties (geospatial data is geographically tagged, allowing interactive maps and better in-depth analysis; combined with satellite imagery, it lets us evaluate interventions that happened in the past and analyse their effectiveness in hindsight). However, even where such data is available, data illiteracy and a lack of trust and education remain significant hindrances in the relevant areas. It seems we’re still at the beginning of the data revolution, after all.
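As a toy illustration of such a quasi-experiment - the numbers and the night-time-lights proxy are entirely my own invention, not from the talks - a simple difference-in-differences compares how a region that received an intervention changed relative to a comparable region that didn’t:

```python
# Toy difference-in-differences on invented numbers: average night-time
# light intensity (a satellite-derived proxy for economic activity) for a
# region that received an aid programme vs a comparable one that did not.
treated = {"before": 12.0, "after": 19.5}  # hypothetical programme region
control = {"before": 11.5, "after": 14.0}  # hypothetical comparison region

# The control region's change estimates what would have happened anyway;
# subtracting it from the treated region's change isolates the programme's
# effect - under the strong assumption that both regions share a trend.
effect = (treated["after"] - treated["before"]) - (
    control["after"] - control["before"]
)

print(effect)  # 5.0
```

With archived satellite imagery, the “before” measurements can be reconstructed decades after the fact, which is what makes retrospective evaluations like this so cheap compared to a trial planned in advance.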
The next paragraph is a combination of different talks with implications for the movement’s general strategy. Taken from and inspired by:
- (i) Amanda Askell’s “Look, Leap or Retreat”
- (ii) Owen Cotton-Barratt’s “Prospecting For Gold”
- (iii) Stefan Schubert’s “The Younger Sibling Fallacy”
(i) Looking and doing research is often worth far more than we’d intuitively expect, especially when the expected value is unclear, because additional data points let us be significantly more precise about the possible positive outcomes of an action. (ii) If we want to motivate others to join in the (re-)search, we should plan strategically at the group level to make the most of each person’s comparative advantage, and we need to avoid shortening our message, as not being understood means risking costly deviations. (iii) Since we tend to see others as less proactive than ourselves, we often dismiss their capacities and overlook the (potential for) cascade effects our actions have; (ii) which means we should always try to move as far as possible into the very end of the heavy tails.
One last thing that was emphasised multiple times was the value of asking. If we don’t understand something, don’t know where to start, or want people to support us, there’s one simple trick: just ask. We tend to feel like there is some social cost to asking, but asking can provide extremely valuable support and only has downsides if we do it far too often. Most of us still don’t do it enough.
Before going, I didn’t expect much from the conference beyond socialising and a lot of fuzzies caused by the humans present. Most of the sessions didn’t sound like they would provide much beyond what I had been reading about every day for the past two years. But the pre-conference workshops alone blew my low expectations out of the water before anything had officially started. Then came the lovely humans and more mind-broadening sessions. And more lovely humans (whom you can ask anything without hesitation). At the very worst, some talks were a little too superficial, but nothing was bad. My sole suggestion for improvement would be to introduce different “difficulty” tracks, allowing newbies and savants to enjoy different sessions, instead of dividing tracks along topics. That seems complicated to implement, though.
So, seriously, the ‘mind-broadenings per minute’ and the ‘density of lovely people per m²’ seem to reach their global optimum at EAG. Go, sign up, be one of these lovely people this year. There’s also a lot of financial support available, if that’s what’s keeping you from it.