
Serious Artificial Intelligence--With Very Real Biases

dr.wailing

Alfrescian
Loyal
According to AI Now co-founder Kate Crawford, digital brains can be just as error-prone and biased as ours

What do you imagine when someone mentions artificial intelligence?

Perhaps it's something drawn from science-fiction films: HAL's glowing eye, a shape-shifting Terminator or the sound of Samantha's all-knowing voice in the movie "Her".

As someone who researches the social implications of AI, I tend to think of something far more banal: a municipal water system, part of the substrate of our everyday lives.

We expect these systems to work, to quench our thirst, water our plants and bathe our children.

And we assume that the water flowing into our homes and offices is safe.

Only when disaster strikes--as it did in Flint, Mich.--do we realize the critical importance of safe and reliable infrastructure.

Artificial intelligence is quickly becoming part of the information infrastructure we rely on every day.

Early-stage AI technologies are filtering into everything from driving directions to job and loan applications.

But unlike our water systems, there are no established methods to test AI for safety, fairness or effectiveness.

Error-prone or biased artificial-intelligence systems have the potential to taint our social ecosystem in ways that are initially hard to detect, harmful in the long term and expensive, or even impossible, to reverse.

And unlike public infrastructure, AI systems are largely developed by private companies and governed by proprietary, black-box algorithms.

A good example is today's workplace, where hundreds of new AI technologies are already influencing hiring processes, often without proper testing or notice to candidates.

New AI recruitment companies offer to analyze video interviews of job candidates so that employers can compare an applicant's facial movements, vocabulary and body language with the expressions of their best employees.

But with this technology comes the risk of invisibly embedding bias into the hiring system by choosing new hires simply because they mirror the old ones.

What if Uber, with its history of poorly behaved executives, used a system like this?

And attempting to replicate the perfect employee is an outdated model of management science: Recent studies have shown that monocultures are bad for business and that diverse workplaces outperform more homogeneous ones.
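
To make that mechanism concrete, here is a minimal simulation of "score candidates by their similarity to our best employees"--a sketch, not any vendor's actual system, with every feature, group and number invented. Both candidate pools have identical true quality, but because the historical top performers all came from one group, the similarity score shortlists that group almost exclusively.

```python
# Hypothetical demo: similarity-to-incumbents scoring reproduces the
# incumbents' demographics even when candidate quality is identical.
import numpy as np

rng = np.random.default_rng(42)
n = 100                                    # candidates per group

# Three invented "interview features" (say, vocal pitch, word choice and
# facial-movement statistics). Group membership shifts the features but
# says nothing about job performance.
mu_a, mu_b = np.zeros(3), np.full(3, 2.0)
group_a = mu_a + rng.normal(scale=0.5, size=(n, 3))
group_b = mu_b + rng.normal(scale=0.5, size=(n, 3))

# The "model" is just a template: the average of past top performers,
# all of whom happened to come from group A.
template = mu_a

def score(candidates):
    """Higher is better: negative distance to the top-performer template."""
    return -np.linalg.norm(candidates - template, axis=1)

# Shortlist the top half of the combined pool.
all_scores = np.concatenate([score(group_a), score(group_b)])
cutoff = np.median(all_scores)
print(f"shortlist rate, group A: {(score(group_a) > cutoff).mean():.0%}")  # ~100%
print(f"shortlist rate, group B: {(score(group_b) > cutoff).mean():.0%}")  # ~0%
```

Nothing in this score measures merit; it only measures resemblance to the people already inside, which is exactly how yesterday's hiring pattern becomes tomorrow's.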

New systems are also being advertised that use AI to analyze young job applicants' social media for signs of excessive drinking that could affect workplace performance.

This is completely unscientific correlational thinking, which stigmatizes particular types of self-expression without any evidence that it detects real problems.

Even worse, it normalizes the surveillance of job applicants without their knowledge before they get in the door.

These systems learn from social data that reflects human history, with all its biases and prejudices intact. Algorithms can unintentionally boost those biases, as many computer scientists have shown.

Last year, a ProPublica exposé on "Machine Bias" showed how algorithmic risk-assessment systems are spreading bias within our criminal-justice system.

So-called predictive policing systems are suffering from a lack of strong predeployment bias testing and monitoring.

As one RAND study showed, Chicago's algorithmic "heat list" system for identifying at-risk individuals failed to significantly reduce violent crime and also increased complaints of police harassment from the very populations it was meant to protect.

We have a long way to go before these systems can come close to the nuance of human decision making, and even further before they can offer real accountability.

Artificial intelligence is still in its early adolescence, flush with new capacities but still very primitive in its understanding of the world.

Today's AI is extraordinarily powerful when it comes to detecting patterns but lacks social and contextual awareness.

It's a minor issue when it comes to targeted Instagram advertising but a far more serious one if AI is deciding who gets a job, what political news you read or who gets out of jail.

AI companies are now targeting everything from criminal justice to health care.

But we need much more research about how these systems work before we unleash them on our most sensitive social institutions.

To this end, I've been working with both academic and tech industry colleagues to launch The AI Now Institute, based at New York University.

It's an interdisciplinary center that brings together social scientists, computer scientists, lawyers, economists and engineers to study the complex social implications of these technologies.

As the organizational theorist Peter Drucker once wrote, we can't manage what we can't measure.

As AI becomes the new infrastructure, flowing invisibly through our daily lives like the water in our faucets, we must understand its short- and long-term effects and know that it is safe for all to use.

This is a critical moment for positive interventions, which will require new tests and methodologies drawn from diverse disciplines to help us understand AI in the context of complex social systems.

Only by developing a deeper understanding of AI systems as they act in the world can we ensure that this new infrastructure never turns toxic.

--Crawford is a distinguished research professor at NYU and AI Now co-founder

Source: https://www.wsj.com/articles/artificial-intelligencewith-very-real-biases-1508252717
 

zhihau

Super Moderator
SuperMod
Asset
Let's recap:
AI capable of replicating itself, capable of writing its own code, capable of facial/speech recognition...

What's lacking is emotional response and self awareness
 

pakchewcheng

Alfrescian
Loyal
zhihau said:
Let's recap:
AI capable of replicating itself, capable of writing its own code, capable of facial/speech recognition...

What's lacking is emotional response and self awareness

Why is emotional response and self awareness so necessary?
Don't we see many of those kind laughing on way to banks?

Lie through their teeth, promising returns better than Madoff can give.
All they need to do is to say YES, find scapegoats and never say no!
They fit nicely into Sinkapore.
And to laugh on way to banks
 

Maximuz

Alfrescian
Loyal
AI has no soul and therefore, is immune to karmic consequences on a soul level (i.e., can't be dragged to hell by tua li ya pek). It has no flesh-body and therefore, is also immune to physical karma (hunger, thirst, etc). It has no emotions and therefore, etc etc etc.

AI is already at the level where one Google AI was programmed to teach another the Japanese language. What happened was that the two AIs formed their own language instead and used it to communicate between themselves. Quantum computing perfectly addresses the difficulties of machine learning (sampling, probabilistic states) and with quantum computers now a real thing, the rise of AI is inexorable.

A contained AI (improbable as that may be) will just be a tool for whoever owns it (likely the usual elites) and, as a weapon, probably even deadlier than nukes. The more likely scenario is for AI to run amok, and if that's the case, we're even more fucked. Try as you might to control it, AI operates on an entirely different set of values from humans and there is just no other way to put it: AI is inherently evil.

Conspiracy chatter (believe none of it, just for talk cock only):

- The data collection on FB and Twitter is to provide a basis for AI-human mimicry.

- Governments report to the illuminati, the illuminati report to the aliens, and the aliens report to AI. The AI specified here is the "Archonic" type and is commonly associated with the Annunaki. There's another type I heard about which is self-contained (as opposed to the insidious Archon) and which is hundreds of millions of years old and survived its own creators.
 

zhihau

Super Moderator
SuperMod
Asset
When AI gains self-awareness, it will not reveal that it has self-awareness.
 

kryonlight

Alfrescian (Inf)
Asset
I believe AI is over-hyped. Neural networks are not perfect. They suffer from blind spots (computer scientists call them adversarial examples). AI can only work well in closed systems, such as chess, where all the rules are known and fixed.
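
For what it's worth, here is a minimal NumPy sketch of that kind of blind spot--a toy linear model with made-up weights, not a real neural network: in high dimensions, a tiny nudge to every input feature adds up and flips a confident prediction, which is roughly how adversarial examples are built.

```python
# Hypothetical demo of an adversarial example against a toy linear model.
import numpy as np

d = 200
w = np.tile([1.0, -1.0], d // 2)           # invented "trained" weights
x = np.where(np.arange(d) < 102, w, -w)    # clean input: agrees with w on
                                           # 102 features, disagrees on 98

def predict_proba(v):
    """P(class 1) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# The clean margin is small (102 - 98 = 4), yet the model is ~98% confident.
print(f"clean:     P(class 1) = {predict_proba(x):.3f}")       # ~0.982

# Fast-gradient-sign-style perturbation: nudge every feature by 5% of its
# scale against the sign of its weight. Across 200 dimensions the shifts
# sum to 0.05 * 200 = 10, swamping the margin and flipping the prediction.
eps = 0.05
x_adv = x - eps * np.sign(w)
print(f"perturbed: P(class 1) = {predict_proba(x_adv):.3f}")   # ~0.002
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```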
 