Chaos Theory Test Site

This is my linkable blog. Here lie assorted ideas, rants and ramblings that I can't seem not to write.

Location: Victoria, Australia

This blog is a result of my wanting to share and exchange ideas with others, without cluttering up their blogs with my lengthy replies or necessarily having to exchange email details. Probably I'm nowhere near as angsty as I sound in some of my posts here. I promise I'm really pretty mellow. Honest.

Thursday, January 29, 2009

Levels of imperfection in rational agents

(Caveat: This was hastily tappitty-tapped on a very, very hot day and may be subject to editing for clarity/coherence/sanity at some point.)

A rational agent "is an agent which takes actions based on information from and knowledge about the agent's environment. It strives to maximize the chances of success, where success is defined as the achievement of some desired outcome"

The use of this agent in modelling human behaviour perplexes me, as it seems potentially corruptible and subject to the imperfections of its human programmer. By programming a rational agent to behave in a given way, almost any outcome can be modelled for a given scenario. At the same time, the behaviour of identically programmed rational agents will be identical under identical circumstances, where humans, with their far more complex and non-identical "programming", will not. The only way to make a model that might predict human behaviour is to make its programming as complex and diverse as that of actual humans.
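
(A quick way to see the "identical programming, identical behaviour" point: two agents sharing exactly the same decision rule and parameters can never disagree, while agents with different parameters can. The little risk/reward scenario below is entirely made up.)

# Agents with identical "programming" (the same risk_aversion value)
# always make the same choice; change the programming and the choices
# can diverge. Payoffs and probabilities are invented for the example.

def decide(risk_aversion, safe_payoff=5, risky_payoff=12, p_win=0.5):
    expected_risky = p_win * risky_payoff
    return "safe" if risk_aversion * safe_payoff >= expected_risky else "risky"

print(decide(1.0), decide(1.0))  # identical programming -> same choice
print(decide(1.0), decide(1.5))  # different programming -> choices can differ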

A term I have seen used is "perfectly rational agent", where the agent is presumed to have all required information and the time needed to process it, as opposed to a "bounded rational agent", which has limited information, limited processing ability or time, or other constraints placed on it to more closely model the behaviour of imperfect agents like humans.
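
(Roughly, the difference looks like this in code: the "perfect" agent gets to evaluate every option, while the bounded one can only afford to examine a small sample before deciding. The options, scoring function and budget are invented for the sake of the sketch.)

import random

# "Perfect" vs "bounded" in miniature: same options, same goal, but the
# bounded agent can only examine a handful of them. A made-up example,
# not a standard model.

options = list(range(100))
value = lambda x: -(x - 63) ** 2        # toy measure of success, best at 63

def perfectly_rational(options):
    # Unlimited information and time: consider every option.
    return max(options, key=value)

def bounded_rational(options, budget=5, rng=random.Random(0)):
    # Limited processing: only `budget` options are ever examined.
    return max(rng.sample(options, budget), key=value)

print(perfectly_rational(options))  # always 63
print(bounded_rational(options))    # the best of a small random sample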

I see humans and animals as bounded rational agents. The intellectual capability required to do more than interpret input and react according to simple programming is beyond most animals, because the cost of maintaining a brain large and powerful enough to do more is greater than the benefit. Of course, there is a wide range of intellectual ability among living things, which could be viewed as depending on how constrained they are by these "bounds" within which they must function.

Humans, who are less bounded, are intelligent enough to evaluate, reason and consciously decide rather than simply react. Looking up and down the scale, we can see that more processing ability plus more information-gathering ability equals higher intelligence. Now humans, by becoming clever enough to invent the question "Why?", have left me wondering: why are humans not more rational? Why are humans, with their big, clever brains, not closer to being perfectly rational beings?

I suspect that human behaviour is harder to model using rational agents than the behaviour of less intelligent animals. I'd have expected humans, being at the "more information plus more processing ability" end of the spectrum, to be more easily modelled by the hypothetical "perfectly rational agent", and I have to wonder why evolution made such a weird corkscrew turn with a half-pike out into the realms of irrationality.

I describe rational agents as being "programmed". Humans are smart enough not only to recognise their own programming, but to question its overall purpose. Some humans want to find their programmer, take it apart and see what makes it tick. Some, for whatever reason, do not want anyone to ever 'look behind the curtain'.

Humans have penetrated the evolutionary "fourth wall". I mean, look at us; we've not only noticed how we evolved, we are taking apart and examining the stuff we are made of. It's no wonder we are interested in critically analysing our programming with regard to the big "Why?" questions. And the singularity that keeps drawing me in, as it has countless others in times past, is this: there is no Answer. No "42". No "God". No "Ultimate Meaning". No Point. (I have my own workaround for this, but it seems that many people find the idea unacceptable.)

Recognition of the ultimate futility of life has a bad effect on morale, especially under adverse circumstances, and therein lies the detriment in being more perfectly rational. Being more perfectly rational makes it more difficult for an individual to miss the fact that life is futile. It makes it hard to ignore the point beyond which continuing to hope and strive is just an exercise in futility. Humans have a weird trick for getting around that: they've evolved in such a way that they keep the advantages of being "clever" without incurring the "futility" penalty, by impairing the "rational" part of the combination.

Animals strive by reflex alone. They don't need a higher purpose to continue to fight against hopeless circumstances. They just do. The nature of their boundedness helps them live in a way that works... for a sustainable number of their species.

Humans who have a purpose feel better, work harder and are more inclined to get along with others who share a common (non-competitive) purpose. It does not matter much what the purpose is, as long as it is relatively stable (un-disprovable is ideal) and not more detrimental than beneficial to those who hold it.

Beyond the fourth wall, there are likely to lie a whole bunch of interesting eventualities. One I have noticed is the ability to recognise that in the absence of a programmer/God/creator figure, the "purpose of life" is not necessarily graven in stone.

People can choose what they want to take on as their individual purpose, and we can each make up our own answer to the question "Why?"

1 Comment:

Paul Harrison said...

Nice.

History: In the beginning were the perfectly rational agents, and indeed, needing no qualification, they were simply known as rational agents. Realizing rational agents were not an accurate model of reality, people have attempted to create more complicated models, such as bounded rationality. With these new models floating around, the qualifier "perfect" became required to distinguish the original model.

Anyway, life for a perfectly rational agent is not futile so long as the desired outcome might be achieved. The theory of rational agents does not specify what this outcome is -- it can be applied to an agent seeking any outcome.

Yes, such an agent might peer beyond the 4th wall, but only to further its pursuit of its desired outcome, and it need not be upset by what it discovers. If humans are upset by what they discover, well, maybe this is not a good model.

4:45 pm  
