Mindreader API, built-in webcams and Minority Report??

for all net-related stuff
gordonrussell
admin
Posts: 269
Joined: Sat Oct 22, 2011 5:26 am
Location: Glasgow UK

Mindreader API, built-in webcams and Minority Report??

Post by gordonrussell »

Call me paranoid, call me a "glass half empty" man, but I do wonder.
"I feel like this technology can enable us to give everybody a non-verbal voice, leverage the power of the crowd," says Dr Rana el Kaliouby, a member of the Media Lab's Affective Computing group.
High ideals, which I applaud... but somehow the word "leverage" makes me cringe a little.

Here is an extract from New Scientist:

Face-reading software to judge the mood of the masses

28 May 2012 by Lisa Grossman
Magazine issue 2866

Systems that can identify emotions in images of faces might soon collate millions of people's reactions to events and could even replace opinion polls

IF THE computers we stare at all day could read our faces, they would probably know us better than anyone.

That vision may not be so far off. Researchers at the Massachusetts Institute of Technology's Media Lab are developing software that can read the feelings behind facial expressions. In some cases, the computers outperform people. The software could lead to empathetic devices and is being used to evaluate and develop better adverts.

But the commercial uses are just "the low-hanging fruit", says Rana el Kaliouby, a member of the Media Lab's Affective Computing group. The software is getting so good and so easy to use that it could collate millions of people's reactions to an event as they sit watching it at home, potentially replacing opinion polls, influencing elections and perhaps fuelling revolutions.

"I feel like this technology can enable us to give everybody a non-verbal voice, leverage the power of the crowd," el Kaliouby says. She and her colleagues have developed a program called MindReader that can interpret expressions on the basis of a few seconds of video. The software tracks 22 points around the mouth, eyes and nose, and notes the texture, colour, shape and movement of facial features. The researchers used machine-learning techniques to train the software to tell the difference between happiness and sadness, boredom and interest, disgust and contempt. In tests to appear in the IEEE Transactions on Affective Computing, the software proved to be better than humans at telling joyful smiles from frustrated smiles. A commercial version of the system, called Affdex, is now being used to test adverts (see "Like what you see?").
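The pipeline the article describes - track facial feature points, derive feature vectors, then use machine learning to separate, say, joyful from frustrated smiles - can be sketched in miniature. Everything below is invented for illustration (the feature names, the numbers, and the nearest-centroid classifier stand in for MindReader's 22 tracked points and trained models):

```python
# Toy sketch of the feature-then-classify pipeline described above.
# Feature values and labels are invented; the real system tracks 22
# facial points and uses properly trained machine-learning models.

import math

# Each "frame" is a feature vector: (lip_corner_pull, eyebrow_raise, head_pitch)
TRAINING = {
    "joyful smile":     [(0.9, 0.4, 0.0), (0.8, 0.5, 0.1)],
    "frustrated smile": [(0.7, 0.1, -0.3), (0.6, 0.0, -0.2)],
}

def centroid(vectors):
    """Average the training vectors for one label."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(frame):
    """Nearest-centroid guess at the expression behind a feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: dist(frame, CENTROIDS[label]))

print(classify((0.85, 0.45, 0.05)))  # near the joyful examples -> "joyful smile"
```

A real system would replace the hand-picked centroids with models trained on thousands of labelled video clips, but the shape of the computation - features in, label out - is the same.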

Collecting emotional reactions in real time from millions of people could profoundly affect public polling. El Kaliouby, who is originally from Egypt, was in Cairo during the uprising against then-president Hosni Mubarak in 2011. She was startled that Mubarak seemed to think people liked his presidency, despite clear evidence to the contrary.

"She thought maybe Mubarak didn't think a million people was a big enough response to believe that people are upset," lab director Rosalind Picard said at the lab's spring meeting on 25 April. "There are 80 million people in Egypt, and most of them were not there. If we could allow them the opportunity to safely and anonymously opt in and give their non-verbal feedback and join that conversation, that would be very powerful."

Pollsters could even collect facial reactions on the streets, or analyse the reaction of an audience listening to a politician's speech. Picard's group recently ran an MIT-wide experiment called Mood Meter, placing cameras all over campus to gauge the general mood. To preserve privacy, the cameras didn't store any video or record faces - they just counted the number of people in the frame, and how many were smiling.
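The Mood Meter's privacy design - keep only aggregate counts, discard every frame - is worth spelling out. This is a minimal sketch under stated assumptions: the `detect_faces` stand-in just returns pre-labelled face records, where the real system ran face and smile detection on camera pixels:

```python
# Sketch of Mood-Meter-style aggregation: only counts leave the camera,
# never frames or identities. detect_faces is a stand-in for a real
# face/smile detector.

def detect_faces(frame):
    """Stand-in detector: a frame is modelled as a list of face records."""
    return frame  # a real system would run detection on raw pixels here

def mood_meter(frames):
    """Aggregate head-count and smile-count across frames, storing nothing else."""
    totals = {"people": 0, "smiling": 0}
    for frame in frames:
        faces = detect_faces(frame)
        totals["people"] += len(faces)
        totals["smiling"] += sum(1 for f in faces if f["smiling"])
        # the frame itself goes out of scope here - no video is retained
    return totals

# Two simulated frames: three people seen in total, two of them smiling.
frames = [
    [{"smiling": True}, {"smiling": False}],
    [{"smiling": True}],
]
print(mood_meter(frames))  # {'people': 3, 'smiling': 2}
```

The privacy property lives in what the function *doesn't* do: nothing identifying survives past the loop body, so only the two running totals can ever be reported.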

Frank Newport, editor in chief of political polling firm Gallup, headquartered in Washington DC, says such software could be useful. "There's no question that emotions and instincts have an impact in politics," he says. "We're certainly open to looking at anything along those lines." But he'd want to know how well facial responses predict actual votes.

Picard worries that the technology might have a dark side. "My fear is that some of these dictators would want to blow away the village that doesn't like them," she says. It would be important to protect the identities and IP addresses of viewers, she says.


Like what you see?

In 2009, MIT researchers Rosalind Picard and Rana el Kaliouby co-founded Affectiva in Waltham, Massachusetts, to commercialise their facial-recognition research.

Since launching a project to record viewers' facial reactions to Super Bowl adverts in February, they have collected more than 40 million frames of people responding to what they see. Facial expressions and head position are picked up by the user's webcam and then processed to gauge emotion.

The adverts that were tested can be viewed on the company website, as can graphs of the audience response, grouped by age. The idea is to give advertisers a fast, accurate response to campaigns.


Here is an extract on the MindReader API:

People express and communicate their mental states, including emotions, thoughts, and desires, through facial expressions, vocal nuances, gestures and other nonverbal channels. This is true even when they are interacting with machines. Our mental states shape the decisions that we make, govern how we communicate with others, and influence attention, memory and behavior. Thus, our ability to read nonverbal cues is essential to understanding, analyzing, and predicting the actions and intentions of others, and is known, in the psychology and cognitive science literature, as "theory of mind" or "mind-reading".

MindReader API enables the real-time analysis, tagging and inference of cognitive-affective mental states from facial video. The API builds on Rana el Kaliouby's doctoral research, which presents a computational model of mind-reading as a framework for machine perception and mental state recognition. This framework combines bottom-up vision-based processing of the face (e.g. a head nod or smile) with top-down predictions of mental state models (e.g. interest and confusion) to interpret the meaning underlying head and facial signals over time. A multilevel, probabilistic architecture (using Dynamic Bayesian Networks) models the hierarchical way in which people perceive facial and other human behavior, and handles the uncertainty inherent in the process of attributing mental states to others. The output probabilities represent a rich modality that technology can use to represent a person's state and respond accordingly.

Using Google's face tracker (formerly NevenVision), 24 feature points are located and tracked on the face. Next, motion, shape and color deformations of these features are used to identify 20 facial and head movements (e.g. head pitch, lip corner pull) and communicative gestures (e.g. head nod, smile, eyebrow flash). Dynamic Bayesian Networks model these head and facial movements over time and infer the person's affective-cognitive state.
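The inference step above can be illustrated with the simplest member of the Dynamic Bayesian Network family: an HMM-style forward update, where each observed gesture shifts the probability over hidden mental states. The states, gestures and all probabilities below are invented for illustration; MindReader's actual multilevel DBN models are far richer:

```python
# Minimal HMM-style forward update, in the spirit of the DBN inference
# described above. All states, gestures and probabilities are invented.

STATES = ["interested", "confused"]

# P(next state | current state)
TRANSITION = {
    "interested": {"interested": 0.8, "confused": 0.2},
    "confused":   {"interested": 0.3, "confused": 0.7},
}

# P(observed gesture | state)
EMISSION = {
    "interested": {"nod": 0.6, "smile": 0.3, "frown": 0.1},
    "confused":   {"nod": 0.2, "smile": 0.1, "frown": 0.7},
}

def infer(gestures, prior=None):
    """Return P(state) after a sequence of gestures (forward algorithm)."""
    belief = dict(prior or {s: 1.0 / len(STATES) for s in STATES})
    for g in gestures:
        # predict: propagate belief through the transition model
        predicted = {
            s: sum(belief[p] * TRANSITION[p][s] for p in STATES)
            for s in STATES
        }
        # update: weight each state by how well it explains the gesture
        unnorm = {s: predicted[s] * EMISSION[s][g] for s in STATES}
        total = sum(unnorm.values())
        belief = {s: v / total for s, v in unnorm.items()}
    return belief

belief = infer(["nod", "smile", "nod"])
print(max(belief, key=belief.get))  # "interested" dominates after nods and smiles
```

The key point, matching the extract, is that the output is a probability distribution rather than a single label - a "rich modality" downstream software can respond to gradually instead of all-or-nothing.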

Links


https://www.newscientist.com/article/mg21428665.400-facereading-software-to-judge-the-mood-of-the-masses.html

https://web.media.mit.edu/~kaliouby/API.html
Brown Sauce
admin
Posts: 1453
Joined: Sun Jan 07, 2007 3:40 pm

Post by Brown Sauce »

Google is getting more and more sinister every day...
major.tom
Macho Business Donkey Wrestler
Posts: 1970
Joined: Sun Jan 21, 2007 7:07 pm
Location: BC, Canada

Post by major.tom »

That article reminded me of this:

https://www.youtube.com/watch?v=f_f5wNw-2c0

Install the Collusion add-on for Firefox and see how many other sites are piggy-backing on the sites you visit. I also use a privacy add-on called NoScript (it lets you allow script access only to the sites you select) and turn off third-party cookies.
gordonrussell
admin
Posts: 269
Joined: Sat Oct 22, 2011 5:26 am
Location: Glasgow UK

Post by gordonrussell »

Thanks for that very informative Gary Kovacs clip.

Here's more potential advertisers' voyeurism:
[web]https://www.techradar.com/news/televisio ... ds-1084247[/web]
However, it's said that Intel believes its hardware's ability to gather data on consumers would represent a major boon for cable providers, who currently have to rely on outmoded Nielsen ratings information from a limited sample of the US population.
https://www.theverge.com/2012/6/8/3072229/intel-planning-tv-platform-with-targeted-ads-via-facial-recognition
gordonrussell
admin
Posts: 269
Joined: Sat Oct 22, 2011 5:26 am
Location: Glasgow UK

Post by gordonrussell »

Brown Sauce wrote: Google is getting more and more sinister every day...
[web]https://www.bbc.co.uk/news/technology-18782224[/web]

and with Google possibly giving Microsoft a kick in the gahoulies
https://www.bbc.co.uk/news/business-18917906

we might do well to keep an eye on Google... and Microsoft of course.