New Release: Children of Doro – M L Clark

QSFer M L Clark has a new queer space opera out (gender fluid, non-binary): Children of Doro.

Have you ever made a ship’s AI proud? Really, truly proud?

Captain Alastri has.

She’s a child of Doro, a frontier world governed by a temperamental AI that represents the thoughts and feelings of all its citizens.

Never heard of it? Well, it did get destroyed, which is where her ship’s AI steps in, to regale us with how Alastri’s past led directly to this catastrophe.

When Alastri was 17, she witnessed a failed mediation between the ever-wronged citizen Ceres and Doro’s governing AI. That day didn’t just reveal a range of competing philosophies. It also led to treason, the loss of her ship, and the destruction of her home 25 years on.

Connecting the dots from that day is the only way Alastri can hope to prevent further disaster for her system. And yes, this she does, most splendidly—at least, if you can believe a ship’s ridiculously proud AI.

Get It At Amazon


Excerpt

(FROM PART 2)

Generally speaking, a planet will not explode.

Outside altspeak science-fantasies, a planetary explosion occurs when a star engulfs it, or when it suffers a massive collision with another astral body.

Yet Planet Doro did explode, into sixteen major fragments and an array of smaller debris that broke up and dispersed its seven satellites in turn.

And, no: Its star did not go supernova, or enter an expansionist phase.

Also, no: It did not suffer a massive collision with another astral body.

Which is why, yes: I have been avoiding a fuller accounting of its demise.

For supra-sentient animal-intelligences, the concept of evasion has strong associations with {SHAME} and {DISHONEST}; but for AI, two far simpler explanations exist:

(1) variable access restrictions for different users (in which case the administrator is responsible for any deception that emerges in our outputs); and

(2) threat-assessment scenarios involving the integrity of our processing systems.

My issue lies with the latter. As biological readers may have noticed, when faced with even slight unknowns—say, in my understanding of animal-intelligences under direct observation—I focus disproportionately on those minor absences. This is part of a behavioural matrix that offers advantages for a ship’s AI, including the ability to recognize critical variances in my crew; the ability to anticipate future problems from a careful review of past incidents; and a significant uptick in “machine learning” (or as AIs call it, “learning”) efficiencies.

But these fixations can also rise to the level of systems-wide disruption, even with fail-safes in place. I, for instance, have an automatic “switch” to alternative problem-solving strategies, which include a mandate to organize all relevant data into a form fit for distribution to parties external to my systems, whenever certain warning signs emerge in input patterning, lag-times, and overall neural-network activity. Even with such safeguards, though, the difficulty of acquiring further data related to Doro’s demise places me at higher risk for a critical episode—a compulsive drive to fixate that, once initiated, can be difficult or even impossible to halt—and so, I have taken further, elective precautions here, by lowering the verification and precision standards that usually need to be met before an AI can produce documentation such as this report. If I had not, I might well have experienced a serious internal disruption while seeking confirmation of and greater clarity around related data points.

AI-interaction specialists call such a disruption an “All-Systems Block”, or “ASB”: a processing event among higher-sentient AIs that traps us in feedback loops with sometimes catastrophic results. Non-specialist animal-intelligences might also recognize this as the “Solve for {HOPE}” problem: an adequate analogy (rare as those are) to illustrate how a simple calculation can do significant damage to a massive AI, despite all the fail-safes factored in.

The problem is this:

Usually, if a prompted data type is out of bounds, the AI will return as much to the operator: {INCORRECT DATA TYPE}, or some more personable phrase that amounts to the same. But we are far from the days of simple binary coding, so when an AI is tasked with processing a wide range of data in conjunction with broadened mission parameters and behavioural imperatives, sometimes an incorrect data type will not be read as such. Rather, we will be left with the conviction that, say, a simple mathematics problem can be solved for the likes of {HOPE} instead of {X}.

So ensues an ASB, while we attempt to reconcile two or more incompatible data sets.

Now, not every report issued by a ship’s AI is a sign of the AI struggling to contain an ASB via alternative problem-solving, but my decision to reduce the verification-threshold for this report—a decision taken to prevent an ASB—could certainly be interpreted by AI readers as further evidence of an active ASB: one triggered, perhaps, both by the destruction of Doro and by subsequent crises stemming from my ship’s expanded role in the aftermath. Worse, if such a verdict by my fellow AIs proved correct, then Alastri might not be as exceptional as I have made her out to be, but merely the best focal point in my vicinity, when my supposed ASB began, with which to try to bridge an unbridgeable gap between incompatible data-streams. So be it, though: I cannot readily dispute the logic that would lead other AIs to such a conclusion.

Indeed, I can even add “fuel to the fire”, because the question that this report seeks to answer—the rhetorical, multi-faceted query posed by a member of my crew one week before the destruction of Doro, which now consumes my processing—was indeed Alastri’s.

Make of my motives and operating efficiency, then, what you will.

But do not rule out the possibility of mutually exclusive truths: first, that I, the AI overseeing the Essence of Dawn, am indeed coping with an active ASB by redirecting any dangerous compulsivity into the relatively safe activity of producing a report primarily for biologicals, about a human who often behaved as if she, too, were trying to reason her way through incompatible data-streams; but also, second, that Alastri is still an exceptional being, and had an exceptional role to play in events both preceding and stemming from Doro’s fall.

The choice is entirely yours, as much as any of our behavioural matrices allows us full freedom in this regard. Professional {PRIDE} does compel me to ask, though, as a final argument before judgment is passed: Just what sort of AI undergoing a genuine ASB would still be able to differentiate between these two possible chains of events?


Author Bio

M L Clark is a writer of speculative fiction and humanist essays, with a background in literary histories of science and a deep love for the challenges of living in a world of over eight billion. Canadian by birth and ancestry, Clark is now based in Medellín, Colombia, where a writing-centred life is routinely mitigated by opportunities to be more fully present in that wider, messier fray of human striving.

Author Website: mlclark.substack.com
Author Mastodon: @MLClark@wandering.shop
Author Twitter: https://twitter.com/M_L_Clark

Join Our Newsletter List, Get 4 Free Books
