Roomba took a naked photo of Ginfreely with her hole exposed, posted it on social media and tagged all her customers.

Cottonmouth

Alfrescian
Loyal

A Roomba photographed a woman on the toilet and it ended up on social media. Now A.I. experts have this warning about bringing tech into your home

Eleanor Pringle
Wed, 18 January 2023, 6:37 PM SGT



Image source: ullstein bild/Getty Images
A woman who signed up to help test a new version of a robotic vacuum cleaner did not expect pictures of her taken on the toilet to end up on social media. But through a third-party leak, that's what happened.
The trial in 2020 went sideways after iRobot, maker of the Roomba line of autonomous robotic vacuum cleaners, asked employees and paid volunteers to gather data in their homes to improve a new model of the machines. iRobot said it made participants aware of how the data would be used and even affixed "recording in process" tabs to the test units.
But through a leak at an outside partner—which iRobot has since cut ties with and is investigating—private pictures ended up on social media.

The machines are not the same as the production models now in consumers' homes, the company was quick to add, saying it "takes data privacy and security very seriously—not only with its customers but in every aspect of its business, including research and development."

Growing mistrust

As A.I. continues to grow in both the professional and private sectors, mistrust of the technology has also increased because of security breaches and a lack of understanding.
A 2022 study by the World Economic Forum showed that just half of the people interviewed trusted companies that use A.I. as much as they trust companies that don't.
There is, however, a direct correlation between trusting A.I. and believing one understands the technology.
That understanding is the key to improving users' experience and safety in the future, said Mhairi Aitken, an ethics fellow at the Alan Turing Institute, the U.K.'s national institute for data science and artificial intelligence.
"When people think of A.I. they think of robots and the Terminator; they think of technology with consciousness and sentience," Aitken said.
"A.I. doesn't have that. It is programmed to do a job, and that's all that it does. Sometimes it's a very niche task. A lot of the time when we talk about A.I. we use the toddler example: that A.I. needs to be taught everything by a human. It does, but A.I. only does what you tell it to do. Unlike a human, it doesn't throw tantrums and decide what it wants to try instead."
A.I. is used widely in the public's day-to-day life, from deciding which emails should go into your spam folders to your phone answering a question with its in-built personal assistant.
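
For a concrete sense of the spam-folder example, here is a minimal sketch of the kind of text classifier that can sort mail, assuming scikit-learn and a toy training set invented purely for illustration; production filters are far more elaborate.

```python
# Minimal sketch of spam filtering as text classification.
# Assumes scikit-learn is installed; the tiny training set is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = spam, 0 = legitimate mail.
emails = [
    "win a free prize now, click here",
    "limited offer, claim your reward",
    "meeting moved to 3pm tomorrow",
    "please review the attached report",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a naive Bayes classifier: the model
# simply learns which words tend to co-occur with each label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["claim your free reward now"]))   # likely [1] -> spam
print(model.predict(["notes from the 3pm meeting"]))   # likely [0] -> ham
```

This is the sense in which, as Aitken says, the system "is programmed to do a job": it ranks word patterns, with no understanding of the mail it sorts.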
Yet it's the entertainment products like smart speakers that people often don't realize use artificial intelligence, Aitken said, and these could intrude on your privacy.
Aitken added, "It's not like your speakers are listening; they're not. What they might do is pick up on word patterns and then feed this back to a developer in a faraway place who is working on a new product or service for launch.
"Some people don't care about that. Some people do, and if you're one of those people it's important to be aware of where you have these products in your home; maybe you don't want it in your bathroom or bedroom. It's not down to whether you trust A.I., it's about whether you trust the people behind it."

Does A.I. need to be regulated?

Writing in the Financial Times, the international policy director at Stanford University’s Cyber Policy Center, Marietje Schaake, said that in the U.S. hopes of regulating A.I. "seem a mission impossible," adding the tech landscape will look "remarkably similar" by the end of 2023.
The outlook is slightly more optimistic for Europe after the European Union announced last year it would create a broad standard for regulating or banning certain uses of A.I.
Issues like the Roomba breach are an example of why legislation needs to be proactive, not reactive, Aitken added: "At the moment we're waiting for things to happen and then acting from there. We need to get ahead of it and know where A.I. is going to be in five years' time."
This would require the buy-in of tech competitors across the globe, however. Aitken says the best way to secure that buy-in is to attract skilled people into public regulation jobs, people with the knowledge to analyze what is coming down the line.
She added that awareness around A.I. is not just down to consumers: "We know that Ts & Cs [terms and conditions] aren't written in an accessible way—most people don't even read them—and that's intentional. They need to be presented in a way that people can understand so they know what they're signing up for."
 