Utopia and Dystopia in the Present

A friend once pointed out that he doesn’t watch films that portray either a dystopian future (e.g., Children of Men or Blade Runner) or a utopian ideal (e.g., Avatar or Gattaca) because they tend to be less than realistic.

There is a lot of talk (and writing) going around about the importance of either 1984 by George Orwell or Brave New World by Aldous Huxley as a literary or cultural guidepost in a time of rampant civic uncertainty and fear.

There are several problems with articulating, and living out, a worldview based on the works of English authors of the early to mid-20th century, but the biggest problem of all is the mindset behind thinking that authors of a dystopian (or utopian) future can possibly provide any actionable wisdom in the present day.

The specific problems are best articulated by others, but the general issues that face believers searching for truth in any conception of utopia (or dystopia) are three-fold:

Utopia (or dystopia) looks different based upon your frame of reference. This is the main problem with applying the logic of utopia (or dystopia) to fleeting present-day political disputes and disagreements, rather than seeking longer-term wisdom. The fact is, for every person who views a position as a dystopian one, there is a person who at the least views the position as not a problem. And some view the position through the frame of utopian thinking.

The dystopia (or utopia) that a person is looking for (typically one represented in film or literature) is rarely exactly the one that manifests in the real world. The specific tentpoles of culture, politics, and societal considerations are fluid and dynamic, not static and solid. There are elements of utopia (or dystopia) that manifest, but not all of them. Not exactly. And the fact is, when a prediction does not manifest with exactitude, the credibility of the predictor (and by extension the reputation of the prediction itself) fails miserably.

How people think about what’s happening now influences how they mentally construct utopias (or dystopias), and then emotionally “buy in” to them. This mental and emotional construction is more an analysis of the present condition of conflict than genuinely deep conflict analysis. This is why films and literature aren’t good predictors of what will happen, what can happen, or even what should happen.

The fundamentals that underlie films and literature about dystopias or utopias are snapshots in time, representing a particular conflict mindset, and a particular set of perspectives on the world and events in it.

We would do well to be skeptical of attempts to glean too much understanding of current events from them, and would do better at managing and engaging with the conflicts we are currently in by dealing bravely with the utopia (or dystopia) we’re creating right now.

[Advice] To What End?

What matters the most?

Asking the right questions, or listening to the right answers?

What makes the most impact?

Personalized individual behavioral changes, or massive societal shifts?

When expanding, rapacious technological advancement merges with the human ability to ignore a crisis until its effects are impossible to manage, the ability to bravely tear down an old system and replace it with another is the only skill that matters.

But if we don’t know what matters the most and if we can’t agree on what makes the most impact, then we can’t answer the last question, which becomes the most critical one to get right:

What outcome do we want to end up with?

[Podcast] The Death of F2F Communication

Our personal assistants have names like Cloe, Clara, Julie, Luka and Amy.

Our devices have names like Alexa, Siri and Cortana.

We are getting the future we were promised, though not evenly distributed (as has been pointed out in the past), and not in the same areas simultaneously. Soon, HAL 9000 will be in our homes, not in a deep space vehicle.

We have FitBits, Jawbones, and Apple and Android Watches. We are slowly getting augmented reality, virtual reality and even electric, automated self-driving cars.

Voice data, movement data, and biometric data collection technologies lie at the “bleeding edge” of future machine-to-human communication technologies. We do not have laws or regulations to deal with the consequences of having these devices, which are always on, always recording, always collecting, and always reporting to someone, somewhere.

We have given up our privacy for convenience, and whether or not you believe this is a Faustian bargain, the deal is in the process of being struck even as you are alive and watching it happen. And the people of the future will not lament the loss of face-to-face communication, any more than present generations lament the passing of the horse and buggy.

How should conflict professionals respond to the death of face-to-face communication and the rise of machine-to-human communication?

  • Get involved in the collection of data, in the organizations that collect it, and even on the boards of organizations that make decisions and regulations about the use of it—peace builders have an obligation to no longer sit on the sidelines, hoping that none of this will happen. Getting involved in all parts of the process, from creation to decision making, is the new obligation for peace builders.
  • Build businesses that act as intermediaries (mediators, if you will) between Alexa, Siri and whatever is next and the people who will seek to control what those devices reveal about people’s private lives—private conflict communications are about to go public. And peace builders have seen the devastating effects of such publicity on relationships, reputation and understanding through the first level of all of this—social media.
  • Prepare to address the stress that will be magnified as people curate their lives, tailoring their responses to what “should” be said rather than to what is actually “true”—with the death of privacy, through all of the devices in your house either recording you, tracking you, suggesting items to you, or even interacting with you, the line between what is truly felt and what you actually say will become even narrower. Peace builders should prepare through training to address this cognitive dissonance, because within a few generations, even more masking of previously transparent communication will occur.

As man and machine begin to merge, first through communication, peace builders should engage with the process proactively and aggressively, rather than waiting and being caught by surprise.

-Peace Be With You All-

Jesan Sorrells, MA
Principal Conflict Engagement Consultant
Human Services Consulting and Training (HSCT)
Email HSCT: jsorrells@hsconsultingandtraining.com
Facebook: https://www.facebook.com/HSConsultingandTraining
Twitter: https://www.twitter.com/Sorrells79
LinkedIn: https://www.linkedin.com/in/jesansorrells/

[Strategy] Open A.I. Disagreements

In a world with responsive, predictive artificial intelligence, operating behind the veneer of the world in which humans operate, a philosophical question arises:

Will the very human tendency toward conflict increase or decrease in a world where the frictions between us and the objects we have created are reduced?

From the OpenAI project to research being done at MIT, Google, and Facebook, the race is on to set the table for the technology of the world one hundred years from now.

As with all great advances in human development (and the development of artificial intelligence capabilities would rival going to the Moon), the applications of artificial intelligence will at first be bent toward satisfying our basest desires and human appetites, and then move up the hierarchy of needs.

But a lot of this research and development is being done by scientists, developers, entrepreneurs, and others (technologists all) who—at least in their public pronouncements—seem to view people and our emotions, thoughts, feelings and tendencies toward irrationality and conflict, as a hindrance rather than as a partner.

Or, to put it in “computer speak”: in the brave new world of artificial intelligence research, humanity’s contributions, and its decision making, are too often viewed as a bug rather than as a feature.

However, design thinking demands that humans—and their messy, irrational problems and conflicts—be placed at the center of such thinking rather than relegated to the boundaries and the edges, even as humans create machines that can learn deeply, perform complex mathematics, execute logical algorithms, and generate better solutions to complex future problems than the humans who created the problems and conflicts in the first place.

Eventually, humans will create intelligence that will mimic our responses so closely that it will be hard to tell whether those responses are “live” or merely “Memorex.”

But until that day comes, mediators, arbitrators, litigators, social workers, therapists, psychologists, anthropologists, philosophers, poets, and writers, need to get into the research rooms, the think tanks and onto the boards of the foundations and the stages at the conferences, with the technologists to remind them that there is more to the future than mere mathematics.

Or else, the consequences of future conflicts (human vs. human, and even machine vs. human) could be staggering.

-Peace Be With You All-

Jesan Sorrells, MA
Principal Conflict Engagement Consultant
Human Services Consulting and Training (HSCT)
Email HSCT: jsorrells@hsconsultingandtraining.com
Facebook: https://www.facebook.com/HSConsultingandTraining
Twitter: https://www.twitter.com/Sorrells79
LinkedIn: https://www.linkedin.com/in/jesansorrells/

[Opinion] A Utopian Singularity

The release of nuclear power was greeted with a mixture of awe and triumph.

Splitting the atom was—at one time—the most difficult task that humanity had set itself upon completing.

Once the atom was split, however, and the power released from that act was applied to the making of war and the destruction of human lives, in order to—ostensibly—prevent the loss of other human lives, humanity recoiled in horror at that which we had accomplished.

Robert Oppenheimer’s words at the Trinity test ring down through to our time: “Now I am become Death, the destroyer of worlds.”

And now, we have arrived at yet another linchpin moment in human history. Just as the act of splitting the atom and releasing its energy was supposed to bring humanity closer to a utopian peace, we are now at a moment when very smart people are promising us that we are ready to release the potential of AI and many other technologies.

They promise us a jobless future of endless prosperity, with at least our basic needs completely fulfilled.

They promise us a future of 3D printed food, self-driving cars, predictive machines that will learn what we need and provide it to us without question.

They promise us a future where there will be haves and have-nots, but where the line between the elite and the commoners will be drawn between those who can defeat, or prolong, their own deaths through genetic manipulation, and those who know that the technology to do so exists, but cannot get it.

But, in the midst of all of these promises—remarkably similar to the many promises made to humanity by well-meaning smart people (like Robert Oppenheimer) before we released atomic power—they do not ask the truly existential questions that the release of such technologies creates.

What’s most disturbing to us is that none of the really smart people in genetics, neurobiology, data analytics, computer and software technology, or any of these other fields seem to be interested in sitting down with a few philosophers, religious practitioners, and policy makers to even discuss the questions in the first place.

To quote another famous man: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

Humanity’s progress is too important to be left alone in the hands of the very smart people.

-Peace Be With You All-

Jesan Sorrells, MA
Principal Conflict Engagement Consultant
Human Services Consulting and Training (HSCT)
Email HSCT: jsorrells@hsconsultingandtraining.com
Facebook: https://www.facebook.com/HSConsultingandTraining
Twitter: https://www.twitter.com/Sorrells79
LinkedIn: https://www.linkedin.com/in/jesansorrells/

The Missing Singularity

We don’t know if you’ve heard, but the singularity is coming.

That moment in time when human beings unite with the technology that we have made and ascend gloriously to the stars…

Unless, of course, some of us decide to not unite gloriously…

Unless, of course, some of us decide to remain late (or non) adopters of the latest technology from the whiz kids at CalTech, Google or even Boston…

Unless, of course, we destroy ourselves—or a portion of the global culture we are rapidly building—in an effort to control or dominate some aspect of technology, rather than going human hand-in-artificial-intelligence hand with it…

Unless, of course, the human heart remains the same…

-Peace Be With You All-

Jesan Sorrells, MA
Principal Conflict Engagement Consultant
Human Services Consulting and Training (HSCT)
Email HSCT: hsconsultingandtraining@gmail.com
Facebook: https://www.facebook.com/HSConsultingandTraining
Twitter: www.twitter.com/Sorrells79
LinkedIn: www.linkedin.com/in/jesansorrells/

[Advice] All That Happens Must Be Known

Given the revelations of internet data surveillance, what concerns should be raised about the possibility of brain monitoring devices?

All this week on the HSCT Communication Blog, we are answering questions put forth by the folks running the upcoming SUNY Broome 6th Annual Interdisciplinary Conference, being held on April 4th and 5th at SUNY Broome Community College.

This week’s question was posed by the plot of Dave Eggers’ most recent novel, The Circle, and was not definitively answered by the end of the book.

Well, we here at HSCT have three primary concerns about brain monitoring devices. And the NSA didn’t make the top three.
  • The first is around marketing and the idea of “opting-out” rather than a mandatory “opt-in.”
The most annoying moment on the internet or social media is waiting for the commercial at the front of a YouTube video to load, with the countdown going before the viewer can “skip this ad.”

As customers (you and I) have gained more control over blocking attempts to sell to us, marketers and advertisers have had to come up with more clever (and more blunt) ways to command our valuable time and attention, with confusing and frustrating results for all parties involved.

Now imagine if marketers had access to the most intimate space on the planet: your private brain space. There would be no “option to opt out,” even though all the legalese would say that there was.

Which gets us to point number 2…

  • The second concern that we have is that, increasingly, the desire not to participate in social communication is seen as a sign of social ineptitude at best, and as dangerous at worst.
Case in point: Whenever a school shooting happens, the first thing that the media does is breathlessly report whether or not the perpetrator possessed a social media account.
If he (and it’s usually a ‘he’) does, then a frenzy of data mining goes on in a search for pathology, motive, and aberration.

In other words, the nature of the aberrant act itself is no longer enough to create outrage; the lack of social participation is the primary driver of outraged responses. Which leads to concern number 3…

  • The third concern is that we have long sought—as individuals, societies and cultures—to control people under the guise of freeing them from Plato’s Cave.

Brain monitoring devices won’t be used to give us freedom, collaboration, and connection. Instead, they will be used to take away freedom, encourage and inflame false fracturing and individualization, and destroy connections between people.

In other words, the criminalization of thought will happen along the same powerful continuum, from social sanction to illegality, that has banned smoking from restaurants, banned trans fats from New York City restaurants, and gotten the White House cook to quit.

The inevitability of technological progress demands that we think about the ramifications of power and control, not only from governments and corporations, but also by and from each other.

So, HSCT’s conflict engagement consultant, Jesan Sorrells, will be presenting on the issue of online reputation maintenance in a world where virtue and ethics are not often addressed.

Register for this FREE event at http://www.sunybroome.edu/web/ethics and stay for the day.

We would love to see you there!
-Peace Be With You All-
Jesan Sorrells, MA
Principal Conflict Engagement Consultant
Human Services Consulting and Training (HSCT)