'Of course, a machinic intelligence does not care about its existence in any profound, emotional sense. Rather, what it values is the efficient fulfillment of its final goal, and in order to maximise the probability that this will be carried out, it needs to ensure that its optimal functioning is not impaired or disrupted in any way.'
Of course... Call this claim C (for 'care'). I’ve been musing, since looking over Bostrom’s latest, and now more pointedly since Ireland sharpened the claim (with discernible irony?), over how nonobvious C seems to me, at least in any sense that distinguishes carbon-based from (say) silicon-based intelligences. On the one hand, C seems to make a falsifiable prediction (1a): that the concepts and behavior emerging from a functioning AI with sufficient general intelligence to represent goals at all would remain programmed in the same sense as a pocket calculator. (Lack of ipseity would almost be an index of the clean separability of means and ends, ipseity of their entanglement.) On the other hand, C seems to suppose (2a) that we clearly understand the difference between such programming and the varieties of what the general intelligences that currently exist are at times inclined to describe as “caring about their existence” (of which “not caring” is a conspicuous mode).
Both (1a) and (2a) tend toward overestimating, I think, differences between “human” and “machine” intelligence. At any rate, my money would be on the opposite side in each case, against the stark distinction between biological machines and (always partially) designed machines (both of which arise from essentially the same mixing of form and chance by a recursive selection-driven process, though across nauseating timescales in our own case). Let me try to put the opposing intuitions positively:
(1b) I predict that if you can plant the representation of a goal into a general intelligence, you also plant the indeterminacy of that representation, recognition of which is maybe as much of a hallmark of general intelligence as language use, though perhaps at a second level. (And there’s a new flavor of extinction scenario, I guess: would a superintelligence get around to philosophical reflection on, and recognition of, the indeterminacy of its ends before making a few world-altering attempts to realize them?)
(2b) Meanwhile, in our own case, there’s a thin line – or maybe no line at all for the most part – between one’s idea of the good (small i small g) and a selfish meme, so that what calls itself “our own” in us cannot appeal to any natural privilege in the face of critique (of elenkhos).
Within the hard sci-fi universe of Greg Egan’s Quarantine, there’s a startling and perfectly Socratic subversion of both the prediction (1a) and the supposition (2a) in the novel’s middle chapters. That text could be a useful starting point for a discussion/debate bringing together AI & emergence with ipseity & care.