Once, as promised, bots start interacting with one another,
understanding bot behavior may become impossible. Anyone who has
had to call a help line with a problem about the way an operating
system from one vendor and program from another are working together--
or failing to work--knows how hard it is to get anyone to take
responsibility for software interactions. Support staff rapidly
renounce all knowledge of (and usually interest in) problems that
arise from interactions because there are just too many possibilities.
So it's easy to imagine sophisticated programmers, let alone ordinary
users, being unable to unravel how even a small group of bots reached
a particular state autonomously. The challenge will be unfathomable
if, as one research group has it, we can "anticipate a scenario in
which billions of intelligent agents will roam the virtual world,
handling all levels of simple to complex negotiations and transactions."
If human agents are confused with digital ones, if human action is
taken as mere information processing, and if the social complexities of
negotiation, delegation, and representation are reduced to "when x, do y,"
bots will end up with autonomy without accountability. Their owners, by
contrast, may have accountability without control.
-- John Seely Brown and Paul Duguid, in "The Social Life of Information"