Good post, but we shouldn’t assume the “funnel” distribution is symmetric about the line of 0 utility. We can expect that unlikely outcomes are good in expectation, just as we expect that likely outcomes are good in expectation. Your last two images show actions which have an immediate expected utility of 0. But if we are talking about an action with generally good effects, we can expect the funnel (or bullet) to start at a positive number. We also might expect it to follow an upward-sloping line, rather than diverging equally to positive and negative outcomes. In other words, bed nets are more likely to please interdimensional travelers than they are to displease them, and so on.

Also, the distribution of outcomes at any level of probability should be roughly Gaussian. Most bizarre, contorted possibilities lead to outcomes that are neither unusually good nor unusually bad. This means it’s not clear that the utility is undefined; as you keep looking at sets of unlikelier outcomes, you get a series of tightly concentrated finite expectations rather than big broad ones that might easily turn out to be hugely positive or negative based on minor factors. Your images of the funnel and bullet should show much more density along the middle, with less density at the top and bottom. We still get an infinite series, so there is that philosophical problem for people who want a rigorous idea of utilitarianism, but it’s not a big problem for practical decision making, because it’s still easy to talk about some interventions being better than others.

I’m not assuming it’s symmetric. It probably isn’t symmetric, in fact. Nevertheless, it’s still true that the expected utility of every action is undefined, and that as we consider increasingly large sets of possible outcomes, the partial sums oscillate wildly rather than converging.

Yes, at any level of probability there should be a higher density of outcomes towards the center. That doesn’t change the result, as far as I can tell. Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won’t change the EV much. But occasionally you’ll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV. And this occasional occurrence will never cease; it’ll always be true that if you keep considering more possibilities, the old possibilities will continue to be dwarfed and the sign will continue to flip. You can never rest easy and say “This is good enough”; there will always be more crucial considerations to uncover.
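Here’s a toy numerical version of what I mean. The numbers are made up purely for illustration, not taken from any real intervention: suppose outcome k has probability 2^-k and utility (-3)^k, so its contribution to the EV is (-1.5)^k, which grows in magnitude and alternates in sign.

```python
# Toy model (hypothetical numbers, purely illustrative): outcome k has
# probability 2**-k and utility (-3)**k, so its contribution to the EV
# is (-1.5)**k. Each new term dwarfs the running total and flips its sign.
def partial_evs(n):
    total, sums = 0.0, []
    for k in range(1, n + 1):
        total += (2.0 ** -k) * ((-3.0) ** k)  # p_k * u_k
        sums.append(total)
    return sums

sums = partial_evs(20)
# The running EV goes -1.5, 0.75, -2.625, 2.4375, ... : the sign flips at
# every step and the swings only get larger, so "the" EV never settles down.
```

Of course real action profiles won’t be this tidy, but the qualitative behavior is the point: whenever the tail terms grow faster than their probabilities shrink, partial sums oscillate without converging.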

So this is a problem in theory—it means we are approximating an ideal which is both stupid and incoherent—but is it a problem in practice?

Well, I’m going to argue in later posts in this series that it isn’t. My argument is basically that there are a bunch of reasonably plausible ways to solve this theoretical problem without undermining long-termism.

That said, I don’t think we should dismiss this problem lightly. One thing that troubles me is how superficially similar the failure mode I describe here is to the actual history of the EA movement: People say “Hey, let’s actually do some expected value calculations” and they start off by finding better global poverty interventions, then they start doing this stuff with animals, then they start talking about the end of the world, then they start talking about evil robots… and some of them talk about simulations and alternate universes...

Arguably this behavior is the predictable result of considering more and more possibilities in your EV calculations, and it doesn’t represent progress in any meaningful sense—it just means that EAs have gone farther down the funnel-shaped rabbit hole than everybody else. If we hang on long enough, we’ll end up doing crazier and crazier things until we are diverting all our funds from x-risk prevention and betting it on some wild scheme to hack into an alternate dimension and create uncountably infinite hedonium.

>Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won’t change the EV much. But occasionally you’ll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV.

But the probability of those rare things will be super low. It’s not obvious that they’ll change the EV as much as nearer term impacts.

This would benefit from an exercise in modeling the utilities and probabilities of a particular intervention to see what the distribution actually looks like. So far no one has bothered (or needed, perhaps) to actually enumerate the second-, third-, and higher-order effects and estimate their probabilities. All this theorizing might be unnecessary if our actual expectations follow a different pattern.

>So this is a problem in theory—it means we are approximating an ideal which is both stupid and incoherent.

Are we? Expected utility is still a thing. Some actions have greater expected utility than others even if the probability distribution has huge mass across both positive and negative possibilities. If infinite utility is a problem then it’s already a problem regardless of any funnel or oscillating type distribution of outcomes.

>Arguably this behavior is the predictable result of considering more and more possibilities in your EV calculations, and it doesn’t represent progress in any meaningful sense—it just means that EAs have gone farther down the funnel-shaped rabbit hole than everybody else.

Another way of describing this phenomenon is that we are simply seizing the low-hanging fruit, and hard intellectual progress isn’t even needed.

>But the probability of those rare things will be super low. It’s not obvious that they’ll change the EV as much as nearer term impacts. … All this theorizing might be unnecessary if our actual expectations follow a different pattern.

Yes, if the profiles are not funnel-shaped then this whole thing is moot. I argue that they are funnel-shaped, at least for many utility functions currently in use (e.g. utility functions that are linear in QALYs). I’m afraid my argument isn’t up yet—it’s in the appendix, sorry—but it will be up in a few days!

>Are we? Expected utility is still a thing. Some actions have greater expected utility than others even if the probability distribution has huge mass across both positive and negative possibilities. If infinite utility is a problem then it’s already a problem regardless of any funnel or oscillating type distribution of outcomes.

If the profiles are funnel-shaped, expected utility is not a thing. The shape of your action profiles depends on your probability function and your utility function. Yes, infinitely valuable outcomes are a problem—but I’m arguing that even if you ignore infinitely valuable outcomes, there’s still a big problem having to do with infinitely many possible finite outcomes, and moreover even if you only consider finitely many outcomes of finite value, if the profiles are funnel-shaped then what you end up doing will be highly arbitrary, determined mostly by whatever is happening at the place where you happened to draw the cutoff.

>Another way of describing this phenomenon is that we are simply seizing the low-hanging fruit, and hard intellectual progress isn’t even needed.

That’s what I’d like to think, and that’s what I do think. But this argument challenges that; this argument says that the low-hanging fruit metaphor is inappropriate here: there is no lowest-hanging fruit or anything close; there is an infinite series of fruit hanging lower and lower, such that for any fruit you pick, if only you had thought about it a little longer you would have found an even lower-hanging fruit that would have been so much easier to pick that it would easily justify the cost in extra thinking time needed to identify it… moreover, you never really “pick” these fruit, in that the fruit are gambles, not outcomes; they aren’t actually what you want, they are just tickets that have some chance of getting what you want. And the lower the fruit, the lower the chance...

>The shape of your action profiles depends on your probability function

Are you saying that there is no expected utility just because people have different expectations?

>and your utility function

Well, of course. That doesn’t mean there is no expected utility! It’s just different for different agents.

>I’m arguing that even if you ignore infinitely valuable outcomes, there’s still a big problem having to do with infinitely many possible finite outcomes,

That in itself is not a problem; imagine a uniform distribution from 0 to 1.
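To spell the example out: a uniform distribution on [0, 1] has uncountably many possible outcomes, each of finite value, yet its expectation is perfectly well defined:

```latex
\mathbb{E}[X] = \int_0^1 x \, dx = \frac{1}{2}
```

So infinitely many finite outcomes only cause trouble when the tails are heavy enough that the sum fails to converge.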

>if the profiles are funnel-shaped then what you end up doing will be highly arbitrary, determined mostly by whatever is happening at the place where you happened to draw the cutoff.

If you do something arbitrary like drawing a cutoff, then of course how you do it will have arbitrary results. I think the lesson here is not to draw cutoffs in the first place.

>That’s what I’d like to think, and that’s what I do think. But this argument challenges that; this argument says that the low-hanging fruit metaphor is inappropriate here: there is no lowest-hanging fruit or anything close; there is an infinite series of fruit hanging lower and lower, such that for any fruit you pick, if only you had thought about it a little longer you would have found an even lower-hanging fruit that would have been so much easier to pick that it would easily justify the cost in extra thinking time needed to identify it… moreover, you never really “pick” these fruit, in that the fruit are gambles, not outcomes; they aren’t actually what you want, they are just tickets that have some chance of getting what you want. And the lower the fruit, the lower the chance...

There must be a lowest-hanging fruit out of any finite set of possible actions, as long as “better intervention than” satisfies basic decision-theoretic properties, which come automatically if the actions have expected utility values.

Also, remember conservation of expected evidence. When we think about the long-run effects of a given intervention, we are updating our prior, which can go either up or down; the intervention can’t predictably come to seem more attractive.
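For reference, conservation of expected evidence is just the law of total probability applied to updating: before we examine evidence E, the prior must equal the probability-weighted average of the possible posteriors,

```latex
P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E)
```

so if examining the long-run effects would predictably make an intervention look better, it should already look that much better now.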
