
More morality system thoughts April 4, 2013

Posted by ficial in brain dump.

I’ve been thinking a bit more about the kinds of choices that an agent can make: solipsistic, tolerant, and empathetic. The tolerant and empathetic ones are the more interesting ones (to me, anyway) because that’s where we begin to get into the idea of morality in the common definition – i.e. how what something does impacts other things, and how that affects the choices one makes. That’s also where the complexity of the system grows dramatically. Those two realms are different from the solipsistic one in that they require taking into account the preferences of other entities.

There are two main branches an agent can take when making a tolerance or empathy evaluation. The first is a projection based approach, where the agent makes the evaluation using its own belief system but from the perspective of the other entity (i.e. the context is identical except that the agent is in the position of the other entity and the agent role in that context is filled by an imaginary independent copy of the actual agent). The second is a model based approach. In this case the agent attempts to use the other entity’s own belief system to make the evaluation for that entity’s position. Since (at least in the real world) it’s basically impossible to use (or even have) a perfect model of the other entity’s belief system the agent must use a simplified model. Depending on the degree of simplification this could potentially make the modeled approach less complex (in terms of storage and processing requirements) than the projected one. Agents can certainly use a mix of the two approaches, and part of the evaluation function is about deciding which approach to use for any given entity.
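
To make the distinction concrete, here's a minimal Python sketch of the two branches. All of the names and structures are placeholders I'm inventing for illustration – belief systems are reduced to plain scoring functions over a context dict – so treat it as a sketch, not a spec.

```python
# Hypothetical sketch: belief systems as scoring functions over a context dict.

def projection_eval(agent_beliefs, other_id, context):
    """Projection: evaluate from the other entity's position,
    but using the agent's OWN belief system."""
    projected = dict(context, position=other_id)  # the agent imagines itself there
    return agent_beliefs(projected)

def model_eval(agent_models, agent_beliefs, other_id, context):
    """Model: evaluate from the other entity's position using a (simplified)
    model of that entity's beliefs; fall back to projection if none exists."""
    model = agent_models.get(other_id)
    if model is None:
        return projection_eval(agent_beliefs, other_id, context)
    return model(dict(context, position=other_id))

# Toy usage:
my_beliefs = lambda ctx: 1.0 if ctx.get("water_available") else 0.0
models = {"entity_b": lambda ctx: 0.5}  # crude model of entity B's beliefs
ctx = {"water_available": True, "position": "me"}
print(projection_eval(my_beliefs, "entity_b", ctx))     # 1.0
print(model_eval(models, my_beliefs, "entity_b", ctx))  # 0.5
```

The fallback in model_eval also hints at the mixed approach: use a model where one exists, project everywhere else.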

One can consider projection to be a subset of modeling. Any given model will have variations from / limitations relative to the entity on which it is based. In a projection based approach the validity of the model is directly dependent on the homogeneity of the entity population – if the agent and all entities are quite similar, the projection will be a good model, but if the agent and entities are all different then it will be a bad model.

Regardless of the approach, any evaluation function that itself takes into account other entities potentially becomes mired down in recursion (“I know that you know that I know that you know that I can’t drink from the cup nearest me…”). There are three main strategies to resolve this:

  1. explicitly limit the depth of recursion
  2. require strictly simpler models (i.e. the model used to do the evaluation in a function must be strictly simpler than the model which caused the function to be used/called), which gives an implicit base case (NOTE: but not necessarily a limited depth unless we also specify a minimum complexity delta)
  3. do something complicated with a differential-equation-like approach

I don’t really know where to go with all that other than to note that it’s a difficult problem.
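
That said, strategy 1 is at least easy to sketch. This is a hypothetical illustration – belief systems and models of other entities are again just scoring functions I'm making up – of capping the recursion at a fixed depth:

```python
# Hypothetical sketch of strategy 1: cap the "I know that you know..."
# recursion at max_depth. Beliefs and models are plain scoring functions.

def nested_eval(beliefs, context, other_models, depth=0, max_depth=2):
    """Score a context, recursively folding in what the other entities
    are believed to think, down to max_depth levels."""
    own_score = beliefs(context)
    if depth >= max_depth or not other_models:
        return own_score  # explicit base case: stop recursing
    nested = [
        nested_eval(model, context, other_models, depth + 1, max_depth)
        for model in other_models.values()
    ]
    return 0.5 * own_score + 0.5 * sum(nested) / len(nested)
```

Strategy 2 would instead require that each model handed down the recursion be strictly simpler than the one above it, so the recursion bottoms out when the models run out of detail.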

Overall, an entity takes more account of other entities as it shifts along the solipsistic->empathetic axis, and as it shifts along the projection->model axis.

There’s a significant increase in the complexity of making a choice in the shift from solipsistic to tolerant, in that additional entities have to be considered. There’s an additional increase in complexity in the shift towards empathy, as more outcomes have to be considered for each other entity. Once the entity is at least in the tolerant range, the complexity also increases based on how other entities are accounted for, and the complexity path probably looks something like: simple model -> projection -> complex model.

There’s another level of complexity that hasn’t yet been covered, and that’s determining what counts as an entity. The more things in the world state that count as an entity the more complicated the processing becomes. That’s probably another function in an entity’s belief system:

DEF: “entity recognition”: a function that returns a number from 0 to 1 which indicates the ‘entity-ness’ of the trait-value in question – this is used in weighting the entity’s results when doing tolerance and empathy calculations
DEF: “strict entity recognition”, “loose entity recognition”: relative terms indicating that the entity recognition function returns higher values for fewer or more (respectively) trait values.
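
As a toy example (the particular traits, weights, and threshold are entirely made up), an entity recognition function and a stricter variant might look like:

```python
# Hypothetical sketch of an entity recognition function: it maps a bundle
# of trait values to a 0..1 "entity-ness" weight, used to weight that
# entity's results in tolerance and empathy calculations.

def entity_recognition(traits):
    score = 0.0
    if traits.get("alive"):        score += 0.5
    if traits.get("communicates"): score += 0.25
    if traits.get("mobile"):       score += 0.25
    return min(score, 1.0)

def strict_entity_recognition(traits, threshold=0.8):
    """A stricter variant: only clearly entity-like things get any weight."""
    raw = entity_recognition(traits)
    return raw if raw >= threshold else 0.0

rock    = {"alive": False, "mobile": False, "communicates": False}
dog     = {"alive": True,  "mobile": True,  "communicates": True}
sapling = {"alive": True,  "mobile": False, "communicates": False}

print(entity_recognition(rock), entity_recognition(dog), entity_recognition(sapling))
# 0.0 1.0 0.5
print(strict_entity_recognition(sapling))  # 0.0 -- below the strict threshold
```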

———–

For an entity to have some kind of morality in the natural-language sense, a number of characteristics are required of its belief system AND of the world:

  1. the entity must have at least 1 belief (evaluative or imperative) – an entity cannot be moral if it has no way of placing a value on what it does and/or the state of the world
  2. there must exist a capability where the entity is the agent and the world state is morally relevant to that entity – an entity cannot be moral if there is no way for its beliefs to impact the world
  3. the entity must have a non-zero tolerance – this point is arguable, but for now at least I’d say that an entity that does not account for anything other than itself cannot be moral in the natural language sense
  4. the entity’s recognition function must identify at least one other entity in the world state which is in the capability for which the first entity is the agent – same as above, an entity cannot be moral without considering at least one other entity

Overall, the factors that limit an entity’s ability to be moral are:

  • the tolerance (and empathy) of the belief system
  • the fidelity of the model of the world (i.e. the world-state representation used in various functions); this may be subdivided into:
    • selection: what is included in the model vs how the world actually is (NOTE: selection is subtly different from abstraction, though the boundary between the two is fuzzy; abstraction is ‘ignoring the parts that are irrelevant [to the current work/problem]’, selection is ‘choosing which parts to include regardless of relevance’; the blurriness arises from the designation of what exactly ‘irrelevant’ is)
    • accuracy: how closely the things that are in the model match/represent reality; how good a predictor the model is of reality
  • the ability to recognize other entities (i.e. the looseness/strictness of entity recognition)
  • the fidelity of models of the belief system of other entities (potentially including accounting for the fidelity of their world-state models as well)
  • the level of complexity / amount of work the processing system can handle
  • the power/opportunity to take actions / make choices

Each of these factors may be rated low to high. An entity’s ability to be moral is limited by the lowest rating of the above factors.
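
In other words it’s a bottleneck: a min over the factor ratings. A trivial sketch, with the factor names paraphrased from the list above and ratings that are entirely made up:

```python
# Minimal sketch: moral capacity as the minimum of the factor ratings
# (all on an arbitrary 0.0..1.0 scale; the values below are invented).

factors = {
    "tolerance_and_empathy":   0.9,
    "world_model_fidelity":    0.7,
    "entity_recognition":      0.4,
    "models_of_other_beliefs": 0.6,
    "processing_capacity":     0.8,
    "power_to_act":            0.9,
}

moral_capacity = min(factors.values())
print(moral_capacity)  # 0.4 -- limited by the weakest factor (entity recognition)
```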

The fidelity of the world model is in turn limited by the ability of the entity to perceive the world. Similarly, the fidelity of models of beliefs of other entities is limited by the ability to perceive the beliefs of other entities – this comes down to observation and communication.

———–

DEF: ‘moral conflict’: Two (or more) entities have a moral conflict if they would choose different actions given capabilities that are the same except for the agent. The degree of conflict is the sum of the differences between the ratings the two entities gave for each chosen action; e.g. if entities A and B choose actions a and b respectively, the degree of conflict is (Aa - Ba) + (Bb - Ab), where Xy is entity X’s rating of action y.
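
Plugging in toy numbers (entirely made up) makes the arithmetic concrete:

```python
# Toy example of the degree-of-conflict formula above; ratings are invented.
ratings_A = {"a": 1.0,  "b": 0.25}  # A rates action a highest, so A chooses a
ratings_B = {"a": 0.25, "b": 0.75}  # B rates action b highest, so B chooses b

degree = (ratings_A["a"] - ratings_B["a"]) + (ratings_B["b"] - ratings_A["b"])
print(degree)  # (1.0 - 0.25) + (0.75 - 0.25) = 1.25
```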

Moral conflict may arise from the core beliefs of the entities (i.e. what is the right and wrong kind of action to take, and what makes a world state better or worse). However, even with identical imperatives and evaluatives, conflicts can arise due to variations in the above limiting factors. E.g. given two entities that believe equally that it is important to give money to other entities, the choice of what to do with a pile of money then depends on what else in the world is recognized as an entity.

———–

I think my next exploration of this topic will be to try putting together some actual morality structures – outlining a data structure for a morality system and trying to use that to model one or a couple of kinds of entities.
