Earthpages.org

The Real Alternative



The ‘internet of things’ is sending us back to the Middle Ages


Is this our relationship to tech companies now?

Queen Mary Master

Joshua A.T. Fairfield, Washington and Lee University

Internet-enabled devices are so common, and so vulnerable, that hackers recently broke into a casino through its fish tank. The tank had internet-connected sensors measuring its temperature and cleanliness. The hackers got into the fish tank’s sensors and then to the computer used to control them, and from there to other parts of the casino’s network. The intruders were able to copy 10 gigabytes of data to somewhere in Finland.

By gazing into this fish tank, we can see the problem with “internet of things” devices: We don’t really control them. And it’s not always clear who does – though often software designers and advertisers are involved.

In my recent book, “Owned: Property, Privacy and the New Digital Serfdom,” I discuss what it means that our environment is seeded with more sensors than ever before. Our fish tanks, smart televisions, internet-enabled home thermostats, Fitbits and smartphones constantly gather information about us and our environment. That information is valuable not just for us but for people who want to sell us things. They ensure that internet-enabled devices are programmed to be quite eager to share information.

Take, for example, Roomba, the adorable robotic vacuum cleaner. Since 2015, the high-end models have created maps of their users’ homes, to navigate through them more efficiently while cleaning. But as Reuters and Gizmodo reported recently, Roomba’s manufacturer, iRobot, may plan to share those maps of the layouts of people’s private homes with its commercial partners.

Security and privacy breaches are built in

Like the Roomba, other smart devices can be programmed to share our private information with advertisers over back-channels of which we are not aware. In a case even more intimate than the Roomba business plan, a smartphone-controllable erotic massage device, called WeVibe, gathered information about how often, with what settings and at what times of day it was used. The WeVibe app sent that data back to its manufacturer – which agreed to pay a multi-million-dollar legal settlement when customers found out and objected to the invasion of privacy.

Those back-channels are also a serious security weakness. The computer manufacturer Lenovo, for instance, used to sell its computers with a program called “Superfish” preinstalled. The program was intended to allow Lenovo – or companies that paid it – to secretly insert targeted advertisements into the results of users’ web searches. The way it did so was downright dangerous: It hijacked web browsers’ traffic without the user’s knowledge – including web communications users thought were securely encrypted, like connections to banks and online stores for financial transactions.

The underlying problem is ownership

One key reason we don’t control our devices is that the companies that make them seem to think – and definitely act like – they still own them, even after we’ve bought them. A person may purchase a nice-looking box full of electronics that can function as a smartphone, the corporate argument goes, but they buy a license only to use the software inside. The companies say they still own the software, and because they own it, they can control it. It’s as if a car dealer sold a car, but claimed ownership of the motor.

This sort of arrangement is destroying the concept of basic property ownership. John Deere has already told farmers that they don’t really own their tractors but just license the software – so they can’t fix their own farm equipment or even take it to an independent repair shop. The farmers are objecting, but maybe some people are willing to let things slide when it comes to smartphones, which are often bought on a payment installment plan and traded in as soon as possible.

How long will it be before we realize they’re trying to apply the same rules to our smart homes, smart televisions in our living rooms and bedrooms, smart toilets and internet-enabled cars?

A return to feudalism?

The issue of who gets to control property has a long history. In the feudal system of medieval Europe, the king owned almost everything, and everyone else’s property rights depended on their relationship with the king. Peasants lived on land granted by the king to a local lord, and workers didn’t always even own the tools they used for farming or other trades like carpentry and blacksmithing.

Over the centuries, Western economies and legal systems evolved into our modern commercial arrangement: People and private companies often buy and sell items themselves and own land, tools and other objects outright. Apart from a few basic government rules like environmental protection and public health, ownership comes with no trailing strings attached.

This system means that a car company can’t stop me from painting my car a shocking shade of pink or from getting the oil changed at whatever repair shop I choose. I can even try to modify or fix my car myself. The same is true for my television, my farm equipment and my refrigerator.

Yet the expansion of the internet of things seems to be bringing us back to something like that old feudal model, where people didn’t own the items they used every day. In this 21st-century version, companies are using intellectual property law – intended to protect ideas – to control physical objects consumers think they own.

Intellectual property control

My phone is a Samsung Galaxy. Google controls the operating system and the Google Apps that make an Android smartphone work well. Google licenses them to Samsung, which makes its own modifications to the Android interface and sublicenses the right to use my own phone to me – or at least that is the argument Google and Samsung make. Samsung cuts deals with lots of software providers that want to take my data for their own use.

But this model is flawed, in my view. We need the right to fix our own property. We need the right to kick invasive advertisers out of our devices. We need the ability to shut down the information back-channels to advertisers, not merely because we don’t love being spied on, but because those back doors are security risks, as the stories of Superfish and the hacked fish tank show. If we don’t have the right to control our own property, we don’t really own it. We are just digital peasants, using the things that we have bought and paid for at the whim of our digital lord.

Even though things look grim right now, there is hope. These problems quickly become public relations nightmares for the companies involved. And there is serious bipartisan support for right-to-repair bills that restore some powers of ownership to consumers.

Recent years have seen progress in reclaiming ownership from would-be digital barons. What is important is that we recognize and reject what these companies are trying to do, buy accordingly, vigorously exercise our rights to use, repair and modify our smart property, and support efforts to strengthen those rights. The idea of property is still powerful in our cultural imagination, and it won’t die easily. That gives us a window of opportunity. I hope we will take it.

Joshua A.T. Fairfield, Professor of Law, Washington and Lee University

This article was originally published on The Conversation. Read the original article.





Melding mind and machine: How close are we?

A noninvasive brain-computer interface based on EEG recordings from the scalp.
Center for Sensorimotor Neural Engineering (CSNE), Photo by Mark Stone, CC BY-ND

James Wu, University of Washington and Rajesh P. N. Rao, University of Washington

Just as ancient Greeks fantasized about soaring flight, today’s imaginations dream of melding minds and machines as a remedy to the pesky problem of human mortality. Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?

Over the last 50 years, researchers at university labs and companies around the world have made impressive progress toward achieving such a vision. Recently, successful entrepreneurs such as Elon Musk (Neuralink) and Bryan Johnson (Kernel) have announced new startups that seek to enhance human capabilities through brain-computer interfacing.

How close are we really to successfully connecting our brains to our technologies? And what might the implications be when our minds are plugged in?

How do brain-computer interfaces work and what can they do?

Origins: Rehabilitation and restoration

Eb Fetz, a researcher here at the Center for Sensorimotor Neural Engineering (CSNE), is one of the earliest pioneers to connect machines to minds. In 1969, before there were even personal computers, he showed that monkeys could amplify their brain signals to move a needle on a dial.

Much of the recent work on BCIs aims to improve the quality of life of people who are paralyzed or have severe motor disabilities. You may have seen some recent accomplishments in the news: University of Pittsburgh researchers use signals recorded inside the brain to control a robotic arm. Stanford researchers can extract the movement intentions of paralyzed patients from their brain signals, allowing them to use a tablet wirelessly.

Similarly, some limited virtual sensations can be sent back to the brain, by delivering electrical current inside the brain or to the brain surface.

What about our main senses of sight and sound? Very early versions of bionic eyes for people with severe vision impairment have been deployed commercially, and improved versions are undergoing human trials right now. Cochlear implants, on the other hand, have become one of the most successful and most prevalent bionic implants – more than 300,000 people around the world use them to hear.

A bidirectional brain-computer interface (BBCI) can both record signals from the brain and send information back to the brain through stimulation.
Center for Sensorimotor Neural Engineering (CSNE), CC BY-ND

The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system. At our center, we’re exploring BBCIs as a radical new rehabilitation tool for stroke and spinal cord injury. We’ve shown that a BBCI can be used to strengthen connections between two brain regions or between the brain and the spinal cord, and reroute information around an area of injury to reanimate a paralyzed limb.
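One way to picture the rerouting idea is as a closed loop: whenever activity recorded at a “source” site crosses a threshold, a stimulation pulse is delivered to a “target” site, forming an artificial connection between the two. The toy simulation below is an illustration only, not a real BBCI pipeline – the signal values, threshold and refractory period are all made up for the example:

```python
import numpy as np

def spike_triggered_stim(recording, threshold, refractory):
    """Toy closed-loop rule: return the sample indices at which a
    stimulation pulse would be delivered to the target site - whenever
    the recorded source signal crosses `threshold`, subject to a
    refractory gap of `refractory` samples between pulses."""
    pulses = []
    last = -refractory
    for i, v in enumerate(recording):
        if v >= threshold and i - last >= refractory:
            pulses.append(i)
            last = i
    return pulses

# Simulated "source" recording: low-level noise plus bursts of activity.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, 1000)
for burst in (200, 205, 600):   # bursts at samples 200, 205 and 600
    signal[burst] = 1.0

pulses = spike_triggered_stim(signal, threshold=0.6, refractory=20)
# The burst at sample 205 falls inside the refractory window after the
# pulse at 200, so it does not trigger a second stimulation.
```

Real systems must do this on implanted electrode arrays with millisecond latency, which is part of why building practical BBCIs is so hard.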

With all these successes to date, you might think a brain-computer interface is poised to be the next must-have consumer gadget.

Still early days

An electrocorticography grid, used for detecting electrical changes on the surface of the brain, is being tested for electrical characteristics.
Center for Sensorimotor Neural Engineering, CC BY-ND

But a careful look at some of the current BCI demonstrations reveals we still have a way to go: When BCIs produce movements, they are much slower, less precise and less complex than what able-bodied people do easily every day with their limbs. Bionic eyes offer very low-resolution vision; cochlear implants can electronically carry limited speech information, but distort the experience of music. And to make all these technologies work, electrodes have to be surgically implanted – a prospect most people today wouldn’t consider.

Not all BCIs, however, are invasive. Noninvasive BCIs that don’t require surgery do exist; they are typically based on electrical (EEG) recordings from the scalp and have been used to demonstrate control of cursors, wheelchairs, robotic arms, drones, humanoid robots and even brain-to-brain communication.
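To give a flavor of the signal processing behind such EEG-based systems, here is a toy sketch under simplified assumptions – real pipelines add filtering, artifact rejection and trained classifiers. It maps the power of the 8–12 Hz alpha band, which typically rises when a user relaxes or closes their eyes, onto a binary command:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` (sampled at `fs` Hz) in [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= f_lo) & (freqs <= f_hi)].mean()

def alpha_command(signal, fs, threshold):
    """Binary BCI output: True ("select") when 8-12 Hz alpha power is high."""
    return bool(band_power(signal, fs, 8.0, 12.0) > threshold)

# Two simulated 2-second scalp "recordings" at 256 Hz:
fs = 256
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(1)
relaxed = 5.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # strong alpha
alert = rng.normal(0, 1, t.size)                                       # noise only
```

Here `alpha_command(relaxed, fs, 50.0)` fires while `alpha_command(alert, fs, 50.0)` does not; a real noninvasive BCI has to make this kind of distinction reliably amid movement artifacts and far noisier signals.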

The first demonstration of a noninvasive brain-controlled humanoid robot “avatar” named Morpheus in the Neural Systems Laboratory at the University of Washington in 2006. This noninvasive BCI infers what object the robot should pick and where to bring it based on the brain’s reflexive response when an image of the desired object or location is flashed.

But all these demos have been in the laboratory – where the rooms are quiet, the test subjects aren’t distracted, the technical setup is long and methodical, and experiments last only long enough to show that a concept is possible. It’s proved very difficult to make these systems fast and robust enough to be of practical use in the real world.

Even with implanted electrodes, another problem with trying to read minds arises from how our brains are structured. We know that each neuron and its thousands of connected neighbors form an unimaginably large and ever-changing network. What might this mean for neuroengineers?

Imagine you’re trying to understand a conversation between a big group of friends about a complicated subject, but you’re allowed to listen to only a single person. You might be able to figure out the very rough topic of what the conversation is about, but definitely not all the details and nuances of the entire discussion. Because even our best implants only allow us to listen to a few small patches of the brain at a time, we can do some impressive things, but we’re nowhere near understanding the full conversation.

There is also what we think of as a language barrier. Neurons communicate with each other through a complex interaction of electrical signals and chemical reactions. This native electro-chemical language can be interpreted with electrical circuits, but it’s not easy. Similarly, when we speak back to the brain using electrical stimulation, it is with a heavy electrical “accent.” This makes it difficult for neurons to understand what the stimulation is trying to convey in the midst of all the other ongoing neural activity.

Finally, there is the problem of damage. Brain tissue is soft and flexible, while most of our electrically conductive materials – the wires that connect to brain tissue – tend to be very rigid. As a result, implanted electronics often cause scarring and immune reactions that make the implants lose effectiveness over time. Flexible biocompatible fibers and arrays may eventually help in this regard.

Co-adapting, cohabiting

Despite all these challenges, we’re optimistic about our bionic future. BCIs don’t have to be perfect. The brain is amazingly adaptive and capable of learning to use BCIs in a manner similar to how we learn new skills like driving a car or using a touchscreen interface. Similarly, the brain can learn to interpret new types of sensory information even when it’s delivered noninvasively using, for example, magnetic pulses.

Learning to interpret and use artificial sensory information delivered via noninvasive brain stimulation.

Ultimately, we believe a “co-adaptive” bidirectional BCI, where the electronics learns with the brain and talks back to the brain constantly during the process of learning, may prove to be a necessary step to build the neural bridge. Building such co-adaptive bidirectional BCIs is the goal of our center.

We are similarly excited about recent successes in targeted treatment of diseases like diabetes using “electroceuticals” – experimental small implants that treat a disease without drugs by communicating commands directly to internal organs.

And researchers have discovered new ways of overcoming the electrical-to-biochemical language barrier. Injectable “neural lace,” for example, may prove to be a promising way to gradually allow neurons to grow alongside implanted electrodes rather than rejecting them. Flexible nanowire-based probes, flexible neuron scaffolds and glassy carbon interfaces may also allow biological and technological computers to happily coexist in our bodies in the future.

From assistive to augmentative

Elon Musk’s new startup Neuralink has the stated ultimate goal of enhancing humans with BCIs to give our brains a leg up in the ongoing arms race between human and artificial intelligence. He hopes that with the ability to connect to our technologies, the human brain could enhance its own capabilities – possibly allowing us to avoid a potential dystopian future where AI has far surpassed natural human capabilities. Such a vision certainly may seem far-off or fanciful, but we shouldn’t dismiss an idea on strangeness alone. After all, self-driving cars were relegated to the realm of science fiction even a decade and a half ago – and now share our roads.

A BCI can vary along multiple dimensions: whether it interfaces with the peripheral nervous system (a nerve) or the central nervous system (the brain), whether it is invasive or noninvasive and whether it helps restore lost function or enhances capabilities.
James Wu; adapted from Sakurambo, CC BY-SA

In a closer future, as brain-computer interfaces move beyond restoring function in disabled people to augmenting able-bodied individuals beyond their human capacity, we need to be acutely aware of a host of issues related to consent, privacy, identity, agency and inequality. At our center, a team of philosophers, clinicians and engineers is working actively to address these ethical, moral and social justice issues and offer neuroethical guidelines before the field progresses too far ahead.

Connecting our brains directly to technology may ultimately be a natural progression of how humans have augmented themselves with technology over the ages, from using wheels to overcome our bipedal limitations to making notations on clay tablets and paper to augment our memories. Much like the computers, smartphones and virtual reality headsets of today, augmentative BCIs, when they finally arrive on the consumer market, will be exhilarating, frustrating, risky and, at the same time, full of promise.

James Wu, Ph.D. Student in Bioengineering, Researcher at the Center for Sensorimotor Neural Engineering, University of Washington and Rajesh P. N. Rao, Professor of Computer Science and Engineering and Director of the Center for Sensorimotor Neural Engineering, University of Washington

This article was originally published on The Conversation. Read the original article.



Forget loving the alien… AI raises new questions about consciousness, the soul and love

Back in the ’80s, David Bowie’s song “Loving the Alien” anticipated an idea that would become more mainstream with the proliferation of specialty TV and radio channels: Would it be possible for a human being to fall in love with an alien?

Today’s hot question again reflects pop culture and recent tech. Aliens are old hat. But computers, well, that’s a whole new vista. We’re seeing a lot more stories about the possibility of artificial intelligence possessing actual consciousness. And sci-fi movies and novels about human beings and machines falling in love are on the rise.

Whether or not AI really possesses consciousness is something we may never know. One could say that AI is just organized energy. And so are we. Therefore both have consciousness created by our respective degrees of energy organization.

Others, usually religious people, insist we have souls but machines do not. And the soul, they say, is the true center of consciousness. So soulless machines simply mimic consciousness.

But how do these religious believers know that God would not bestow souls on machines?

Can religious traditionalists be 100% sure?


Artificial Intelligence (John Cale album) (Photo credit: Wikipedia)

If we look into the human body, it really is an electro-chemical apparatus. Those nerve impulses scientists are always talking about, well, they are transmitted through electrical changes within the body.

So fear not. If you happen to be falling in love with your computer or talking car, you just might not be a social misfit compensating through imaginary love. And even if we never know for sure, the future no doubt will see closer links among men, women, and machines.



Synthetic human genome more than science fiction



Artificial Intelligence will have to learn like the rest of us

Interesting idea in the tweeted article:

It would take far too long to program every speech thread required for normal human conversation, so machines will have to ask the right questions when faced with uncertainty, and learn from the human answers.

That sounds great. But what about morality? Can a machine learn right from wrong? Or decide whether or not to save a child or a bumblebee from a natural disaster? I’m not sure. Part of the answer, I think, depends on whether or not AI would have some kind of soul or higher consciousness that transcends its circuits. Before we say that it doesn’t, it’s probably best to just say “we don’t know” and leave it there.

To me, it seems like our cars and computers have personalities of their own. Sure, I’m probably just projecting my own thoughts and feelings onto the machines… but they are just organized energy… and so are we. So can we really be sure?

Something for future philosophers and, perhaps, social rights activists to ponder down the road.

–MC



Time to teach ethics to artificial intelligence



Robot love – Till your battery charges no more…

I tend to see myself as a fairly open-minded guy. But last night George Noory’s Coast to Coast AM took the cake. In passing I heard talk about the idea of human and robot marriage. I have wondered if AI possesses some kind of consciousness. But never have I considered this!

MC