Intro

Is there any economic advantage in altruism compared to selfish foraging? A recent agent-based simulation by Klaus Jaffe suggests that it has little observable benefit to society as a whole (http://jasss.soc.surrey.ac.uk/5/3/3.html). So there is a real question here.

Research

In Question1 I compare two economies:
* SelfishForagers, who have no interactions with each other
* GiftAgents, who donate their surplus to a needy neighbour.

Result
The result seems to show that the gift economy keeps twice as many agents alive as the selfish foraging economy, and has produced just under twice as much stuff. So why a result so different from Jaffe's? From a cursory glance at his paper, my guess is that it's because my simulation has two types of resource, while his has only one. What my simulation shows is division of labour: specialists who are good at foraging one resource, but inadequate at another, are able to collaborate with neighbours who have the opposite skills.

NB: This experiment used two resources:
* food, 10, 3, 3
* drink, 10, 3, 3

I tested this idea by running the experiment again, this time with only one resource: food. (Question2)
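To make the comparison concrete, here is a minimal sketch (in Python, not the actual OPTIMAES code) of the mechanics described above: agents forage according to their skills, gift agents pass any surplus to their neediest neighbour, everyone consumes their per-resource need, and an agent dies when a stock runs out. All class names, starting stocks, and skill numbers are illustrative assumptions, and the death rule here is simplified to "a stock goes negative".

```python
class Agent:
    """Toy forager with per-resource stocks, needs, and foraging skills."""
    def __init__(self, skills, needs, giver=False):
        self.stocks = {r: 10 for r in needs}   # starting stock per resource
        self.skills = skills                   # amount foraged per time-step
        self.needs = needs                     # amount consumed per time-step
        self.giver = giver                     # GiftAgent or SelfishForager?
        self.neighbours = []
        self.alive = True

    def give_surplus(self):
        # GiftAgent behaviour: pass anything above this step's need
        # to the neediest living neighbour, resource by resource.
        if not self.giver:
            return
        for r in self.stocks:
            surplus = self.stocks[r] - self.needs[r]
            needy = [n for n in self.neighbours
                     if n.alive and n.stocks[r] < n.needs[r]]
            if surplus > 0 and needy:
                target = min(needy, key=lambda n: n.stocks[r])
                target.stocks[r] += surplus
                self.stocks[r] -= surplus

def run(agents, steps):
    """Each time-step: forage, gift, consume, then a death check."""
    for _ in range(steps):
        living = [a for a in agents if a.alive]
        for a in living:
            for r in a.stocks:
                a.stocks[r] += a.skills[r]     # forage
        for a in living:
            a.give_surplus()                   # no-op for selfish foragers
        for a in living:
            for r in a.stocks:
                a.stocks[r] -= a.needs[r]      # consume
            if any(s < 0 for s in a.stocks.values()):
                a.alive = False                # ran out of something

NEEDS = {"food": 3, "drink": 3}

def specialist_pair(giver):
    # Two complementary specialists: each good at one resource only.
    a = Agent({"food": 6, "drink": 0}, NEEDS, giver)
    b = Agent({"food": 0, "drink": 6}, NEEDS, giver)
    a.neighbours, b.neighbours = [b], [a]
    return [a, b]

selfish = specialist_pair(giver=False)
givers = specialist_pair(giver=True)
run(selfish, 20)
run(givers, 20)
print(sum(a.alive for a in selfish), "selfish specialists alive")  # 0
print(sum(a.alive for a in givers), "gift agents alive")           # 2
```

With two complementary specialists, the selfish pair both starve on the resource they can't forage, while the gifting pair keep each other alive indefinitely, which is the division-of-labour effect suggested above.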
As you can see, it doesn't corroborate that well. The two economies are closer, suggesting that division of labour had some effect on the success of the gift economy, but wasn't the whole story. Another interesting observation: the survival rate of the selfish foragers has gone up. This is because, with fewer resources to run out of, the chance of an agent dying is lower. Why we get such a different result from Jaffe therefore still needs more thought.

Why the difference?

Well, I think I found some differences in the models used:
# Jaffe's model includes resource use (to stay alive an agent consumes a certain amount of resources). If I understood correctly, in OPTIMAES an agent dies if it doesn't have enough resources, but doesn't consume them.
# In OPTIMAES an agent gives its surplus to its neediest neighbour. In Jaffe's model, if 2 agents are close enough and the wealth difference is greater than a certain value, then part of the wealth difference is transferred from one to the other.
# In Jaffe's model resources are limited and found. I don't quite understand how resources are created in OPTIMAES, but Jaffe's seems to be a zero-sum game (with no accounting for the concept of development).
# Jaffe limits the kinds of altruism tested (see 3.6). This means that OPTIMAES is getting transactions that Jaffe's model is not accounting for.
# There are more differences (see the methods part).

-- JoaoNeves

Good points Joao.
# An OPTIMAES agent does consume resources every time-step, so it is possible that an agent can run out of a resource and die. But I don't think that's your point. It sounds as if you read Jaffe's model as having a more intimate connection between consumption and dying: for example, the agent tries to consume a resource, and in failing, dies. In OPTIMAES, the agent consumes what resources it has, and later in the time-step a test is made to see if the resource has fallen below the need. If it has, the agent dies. Maybe there's a subtle difference here.
Which model is more realistic? Are there any problems with the OPTIMAES way of doing things?
# There's certainly a difference in that Jaffe's model uses agents roaming in a space while we use an abstract model of a social network. What does that mean? That, in principle, any of his agents might interact. OTOH ours are constrained to interact with their neighbourhood. But we do have some flexibility. We can compare a fully connected social network, and different sizes of local network, to see how much effect this has. Something to add to the todo list.
# Yep. That's it, isn't it? If his is a zero-sum game, then why would you expect altruism to increase the sum? I think this is probably the answer to the difference. In OPTIMAES, we assume that every agent creates resources (perhaps the word "resources" is misleading here).
# I'll go and read the paper again and try to understand the limits.

Thanks again -- PhilJones

Criticisms

Random Thoughts

Pretty sure the simple Giver model is equivalent to a Robin Hood Thief sort-of-model. Running it with freeloaders might be interesting. -- AdamBurke

: Yep, just like most people, my common-sense intuition is that if we let agents decide whether to work or not, give them an incentive not to, and hope gift giving will carry the free-loaders, it won't be successful. But let's implement and test. The reason for doing that is we can then see what certain kinds of modification can do. For example, I think a GiftEconomy? with some kind of reputation system, that loosely accounts for whether people are "pulling their weight", may well be sustainable. Also, we'll have to make explicit the kind of motivation we're giving agents to freeload. Will we just make agents who refuse to work altogether? Will we make agents who look around their neighbourhood, see if there's sufficient surplus, and then decide not to work? Will we make agents who stop working after they've received a life-supporting donation?
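The three freeloading motivations just listed could be made explicit as interchangeable work-decision policies plugged into an agent's foraging step. A hypothetical sketch (the class names, the `wants_to_work` protocol, and the surplus threshold are my inventions, not anything in OPTIMAES):

```python
class Freeloader:
    """Refuses to work altogether, relying entirely on gifts."""
    def wants_to_work(self, local_surplus, got_life_saving_gift):
        return False

class SurplusWatcher:
    """Works only when the neighbourhood surplus looks too thin to carry it."""
    def __init__(self, enough=10):
        self.enough = enough   # assumed threshold for "sufficient surplus"
    def wants_to_work(self, local_surplus, got_life_saving_gift):
        return local_surplus < self.enough

class GiftQuitter:
    """Works until it receives a life-supporting donation, then retires."""
    def __init__(self):
        self.retired = False
    def wants_to_work(self, local_surplus, got_life_saving_gift):
        if got_life_saving_gift:
            self.retired = True
        return not self.retired

# The foraging step would then become conditional, something like:
#   if agent.policy.wants_to_work(surplus_around(agent), agent.was_rescued):
#       forage(agent)
quitter = GiftQuitter()
print(quitter.wants_to_work(0, False))  # True: still working
print(quitter.wants_to_work(0, True))   # False: retires on first rescue
print(quitter.wants_to_work(0, False))  # False: stays retired
```

Separating the motivation out as a policy object would let the same gift-economy run be repeated with each freeloader type, which is the test proposed above.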
: It's also important to note that there may be a similar disincentive to produce in ExchangeEconomies?. Traders may choose to buy and sell things rather than make them. Conventionally, we presume that this is still useful in that it helps move stuff around. But it may be that idle but generous free-loaders, who pass on handouts to other free-loaders, are equally successful at this.

: Anyway, this is speculation. The important thing is to get testing. -- PhilJones

IMO people are selfish under two conditions. The first is when it's perceived as necessary for survival. The second is when it will not unduly harm others. If I had time to contribute to this project, I'd model this: create an agent which, when presented with a do-or-die situation, acts selfishly; and when presented with a situation in which its survival is not at stake, but where acting selfishly would (or could) cause another agent's death, acts altruistically. -- Thomas

I'd guess at a third condition under which people are selfish: when there's no reason for them not to be. Much work has been done to argue that when "altruism" is societally monitored, it tends to manifest itself more often. There's the example from Matt Ridley's book, The Origins of Virtue, that shows that vampire bats will share food with one another if (a) there is need and (b) the bat in need has not refused to share in the past. Could agents learn over time not to share with those who aren't predisposed to sharing? Could those who are normally selfish learn to share with others after being refused repeatedly? I've never run this type of simulation (although this is a great opportunity to learn, thanks for doing this), but these are my two cents for now. I'm eager to follow the progress here. -- robert
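The vampire-bat style conditional sharing robert describes, share with the needy unless they have refused you in the past, could be prototyped with a simple grudge list. A speculative sketch (the class names and the asking protocol are assumptions, not part of any existing OPTIMAES code):

```python
class ReciprocalSharer:
    """Shares with any asker who hasn't refused us before."""
    def __init__(self, name):
        self.name = name
        self.grudges = set()          # names of agents who refused us

    def will_share_with(self, asker):
        return asker.name not in self.grudges

    def ask(self, other):
        """Ask `other` for help; remember a refusal for next time."""
        if other.will_share_with(self):
            return True
        self.grudges.add(other.name)  # hold the grudge
        return False

class AlwaysSelfish(ReciprocalSharer):
    """The 'normally selfish' agent: never shares."""
    def will_share_with(self, asker):
        return False

bat = ReciprocalSharer("bat")
miser = AlwaysSelfish("miser")
print(bat.ask(miser))   # False: refused, and the bat remembers
print(miser.ask(bat))   # False: the bat now refuses the miser in turn
print(bat.ask(ReciprocalSharer("stranger")))  # True: no grudge yet
```

Robert's second question, whether the selfish could learn to share after being refused repeatedly, would be a matter of letting `AlwaysSelfish` count its own refused requests and switch strategy past some threshold.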