when I want to be part of an experiment..i'll sign up....
Facebook Finally Admits 'Mood Altering' Study Not The Best Idea
The Huffington Post | By Alexis Kleinman
Posted: 10/02/2014 3:11 pm EDT Updated: 10/03/2014 1:59 pm EDT
Facebook finally just kind of apologized for manipulating the emotions of hundreds of thousands of people.
In a blog post published to Facebook, the company's chief technology officer, Mike Schroepfer, expressed regret for a research experiment conducted on more than 689,000 Facebook users in 2012 in which News Feeds were purposefully manipulated to alter people's moods. He also announced some changes the company plans to make to ensure future experiments won't be so creepy.
"It is clear now that there are things we should have done differently," Schroepfer writes. In the post, Schroepfer admits Facebook was "unprepared" for the backlash caused by the research. The experiment should have been more widely reviewed and better explained to users.
When word of the experiment leaked earlier this year, the public freaked out. Even the architect of the study admitted it was creepy, the Atlantic reported.
"This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated," Facebook's chief operating officer Sheryl Sandberg said -- stopping short of apologizing for anything but the poor communication
In his post, Schroepfer goes farther than Sandberg but never explicitly says sorry. He also says Facebook will continue experimenting and researching our behavior.
"We're committed to doing research to make Facebook better, but we want to do it in the most responsible way," Schroepfer writes.
He explains that going forward Facebook will give researchers "clearer guidelines," create a review panel for such studies, increase education for researchers and create a public website for its published research.
........................................
a change of methods..isn't improvement..imo
http://time.com/3457305/facebook-mood-study/
Facebook Changing Research Methods After Controversial Mood Study
"It is clear now that there are things we should have done differently"
Facebook has issued a mea culpa for a controversial experiment on its users that gained widespread attention over the summer, promising to revamp its research practices going forward.
In a blog post, Chief Technology Officer Mike Schroepfer acknowledged the social network mishandled a 2012 study that altered the types of posts some users saw in their News Feeds in order to determine whether such a change would affect the emotional tone of their own posts. The results of the study were published this June, angering some users because no one gave prior consent for the study, nor did it clear any kind of review board, a step typically undertaken by academic research organizations.
"It is clear now that there are things we should have done differently," Schroepfer wrote. "For example, we should have considered other non-experimental ways to do this research. The research would also have benefited from more extensive review by a wider and more senior group of people. Last, in releasing the study, we failed to communicate clearly why and how we did it."
The company is now instituting a new framework for handling both internal experiments and research that may later be published. Research that is studying specific groups of people or relates to "deeply personal" content (such as emotions) will go through an "enhanced review process" before being approved. Facebook has also set up a panel of employees from different parts of the company, such as the privacy and legal teams, that will review potential research projects. The social network will also incorporate education on research practices into the introductory training that is given to new company engineers and present all the public research it conducts on a single website.
Facebook did not provide any detail on what the enhanced review process would look like or whether external auditors would review the company's research. The company also retains the right to conduct any experiments it deems appropriate through its data use policy.
........................................
http://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/?single_page=true
Everything We Know About Facebook's Secret Mood Manipulation Experiment
It was probably legal. But was it ethical?
Robinson Meyer
Jun 28 2014, 2:51 PM ET
Updated, 09/08/14
Facebook's News Feed—the main list of status updates, messages, and photos you see when you open Facebook on your computer or phone—is not a perfect mirror of the world.
But few users expect that Facebook would change their News Feed in order to manipulate their emotional state.
We now know that's exactly what happened two years ago. For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves.
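To make the mechanics concrete, here is a minimal Python sketch of the kind of probabilistic filtering the study describes: posts containing words from an emotion word list are hidden from the feed some fraction of the time, depending on which experimental arm a user is in. The word lists, omission rate, and function names are illustrative assumptions, not the study's actual code.

import random

# Illustrative stand-in word lists; the real study used LIWC dictionaries.
POSITIVE_WORDS = {"happy", "great", "love", "awesome"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful"}

def contains_any(post, word_set):
    """True if the post contains at least one word from the given list."""
    return any(word in word_set for word in post.lower().split())

def filter_feed(candidate_posts, condition, omission_rate=0.5):
    """Hide a fraction of emotional posts depending on the experimental arm.

    condition: "reduce_positive" or "reduce_negative" (hypothetical labels).
    omission_rate: assumed probability of hiding a matching post.
    """
    target = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
    shown = []
    for post in candidate_posts:
        if contains_any(post, target) and random.random() < omission_rate:
            continue  # this post never appears in the user's News Feed
        shown.append(post)
    return shown

posts = ["Feeling great today!", "This week has been awful.", "Lunch at noon?"]
print(filter_feed(posts, condition="reduce_positive"))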
This tinkering was just revealed as part of a new study, published in the prestigious Proceedings of the National Academy of Sciences. Many previous studies have used Facebook data to examine "emotional contagion," as this one did. This study is different because, while other studies have observed Facebook user data, this one set out to manipulate it.
The experiment is almost certainly legal. In the company's current terms of service, Facebook users relinquish the use of their data for "data analysis, testing, [and] research." Is it ethical, though? Since news of the study first emerged, I've seen and heard both privacy advocates and casual users express surprise at the audacity of the experiment.
We're tracking the ethical, legal, and philosophical response to this Facebook experiment here. We've also asked the authors of the study for comment. Author Jamie Guillory replied and referred us to a Facebook spokesman. Early Sunday morning, a Facebook spokesman sent this comment in an email:
This research was conducted for a single week in 2012 and none of the data used was associated with a specific person's Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people's data in connection with these research initiatives and all data is stored securely.
And on Sunday afternoon, Adam D.I. Kramer, one of the study's authors and a Facebook employee, commented on the experiment in a public Facebook post. "And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it," he writes. "Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. [...] In hindsight, the research benefits of the paper may not have justified all of this anxiety."
Kramer adds that Facebook's internal review practices have "come a long way" since 2012, when the experiment was run.
What did the paper itself find?
The study found that by manipulating the News Feeds displayed to 689,003 Facebook users, it could affect the content those users posted to Facebook. More negative News Feeds led to more negative status messages, just as more positive News Feeds led to more positive statuses.
As far as the study was concerned, this meant that it had shown "that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness." It touts that this emotional contagion can be achieved without "direct interaction between people" (because the unwitting subjects were only seeing each others' News Feeds).
The researchers add that never during the experiment could they read individual users' posts.
Two interesting things stuck out to me in the study.
The first? The effect the study documents is very small, as little as one-tenth of a percent of an observed change. That doesn't mean it's unimportant, though, as the authors add:
Given the massive scale of social networks such as Facebook, even small effects can have large aggregated consequences. [...] After all, an effect size of d = 0.001 at Facebook's scale is not negligible: In early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.
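As a back-of-the-envelope illustration of that aggregation argument (the daily post volume and per-post standard deviation below are hypothetical round numbers, not figures from the paper):

# All inputs except d are hypothetical round numbers for illustration.
d = 0.001                     # standardized effect size reported in the paper
sd_emotion_words = 1.0        # assumed SD of emotion words per status update
posts_per_day = 500_000_000   # assumed daily volume of status updates

shift_per_post = d * sd_emotion_words            # ~0.001 emotion words per post
aggregate_shift = shift_per_post * posts_per_day
print(f"{aggregate_shift:,.0f} extra (or fewer) emotion expressions per day")
# -> 500,000: on the order of "hundreds of thousands" per day, as the authors note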
The second was this line:
Omitting emotional content reduced the amount of words the person subsequently produced, both when positivity was reduced (z = −4.78, P < 0.001) and when negativity was reduced (z = −7.219, P < 0.001).
In other words, when researchers reduced the appearance of either positive or negative sentiments in people's News Feeds—when the feeds just got generally less emotional—those people stopped writing so many words on Facebook.
Make people's feeds blander and they stop typing things into Facebook.
Was the study well designed?
Perhaps not, says John Grohol, the founder of psychology website Psych Central. Grohol believes the study's methods are hampered by the misuse of tools: Software better matched to analyze novels and essays, he says, is being applied toward the much shorter texts on social networks.
Let's look at two hypothetical examples of why this is important. Here are two sample tweets (or status updates) that are not uncommon:
•"I am not happy.
•"I am not having a great day."
An independent rater or judge would rate these two tweets as negative — they're clearly expressing a negative emotion. That would be +2 on the negative scale, and 0 on the positive scale.
But the LIWC 2007 tool doesn't see it that way. Instead, it would rate these two tweets as scoring +2 for positive (because of the words "great" and "happy") and +2 for negative (because of the word "not" in both texts).
"What the Facebook researchers clearly show," writes Grohol, "is that they put too much faith in the tools they're using without understanding — and discussing — the tools' significant limitations."
Did an institutional review board (IRB)—an independent ethics committee that vets research that involves humans—approve the experiment?
According to a Cornell University press statement on Monday, the experiment was conducted before an IRB was consulted.* Cornell professor Jeffrey Hancock—an author of the study—began working on the results after Facebook had conducted the experiment. Hancock only had access to results, says the release, so "Cornell University's Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required."
In other words, the experiment had already been run, so its human subjects were beyond protecting. Assuming the researchers did not see users' confidential data, the results of the experiment could be examined without further endangering any subjects.
Both Cornell and Facebook have been reluctant to provide details about the process beyond their respective prepared statements. One of the study's authors told The Atlantic on Monday that he's been advised by the university not to speak to reporters.
By the time the study reached Susan Fiske, the Princeton University psychology professor who edited the study for publication, Cornell's IRB members had already determined it outside of their purview.
Fiske had earlier conveyed to The Atlantic that the experiment was IRB-approved.
"I was concerned," Fiske told The Atlantic on Saturday, "until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time."
On Sunday, other reports raised questions about how an IRB was consulted. In a Facebook post on Sunday, study author Adam Kramer referenced only "internal review practices." And a Forbes report that day, citing an unnamed source, claimed that Facebook only used an internal review.
When The Atlantic asked Fiske to clarify Sunday, she said the researchers' "revision letter said they had Cornell IRB approval as a 'pre-existing dataset' presumably from FB, who seems to have reviewed it as well in some unspecified way... Under IRB regulations, pre-existing dataset would have been approved previously and someone is just analyzing data already collected, often by someone else."
The mention of a "pre-existing dataset" here matters because, as Fiske explained in a follow-up email, "presumably the data already existed when they applied to Cornell IRB." (She also noted: "I am not second-guessing the decision.") Cornell's Monday statement confirms this presumption.
On Saturday, Fiske said that she didn't want the "the originality of the research" to be lost, but called the experiment "an open ethical question."
"It's ethically okay from the regulations perspective, but ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done...I'm still thinking about it and I'm a little creeped out, too."
For more, check Atlantic editor Adrienne LaFrance's full interview with Prof. Fiske.
From what we know now, were the experiment's subjects able to provide informed consent?
In its ethical principles and code of conduct, the American Psychological Association (APA) defines informed consent like this:
When psychologists conduct research or provide assessment, therapy, counseling, or consulting services in person or via electronic transmission or other forms of communication, they obtain the informed consent of the individual or individuals using language that is reasonably understandable to that person or persons except when conducting such activities without consent is mandated by law or governmental regulation or as otherwise provided in this Ethics Code.
As mentioned above, the research seems to have been carried out under Facebook's extensive terms of service. The company's current data use policy, which governs exactly how it may use users' data, runs to more than 9,000 words and uses the word "research" twice. But as Forbes writer Kashmir Hill reported Monday night, the data use policy in effect when the experiment was conducted never mentioned "research" at all—the word wasn't inserted until May 2012.
Never mind whether the current data use policy constitutes "language that is reasonably understandable": Under the January 2012 terms of service, did Facebook secure even shaky consent?
The APA has further guidelines for so-called "deceptive research" like this, where the real purpose of the research can't be made available to participants during research. The last of these guidelines is:
Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data.
At the end of the experiment, did Facebook tell the user-subjects that their News Feeds had been altered for the sake of research? If so, the study never mentions it.
James Grimmelmann, a law professor at the University of Maryland, believes the study did not secure informed consent. And he adds that Facebook fails even its own standards, which are lower than that of the academy:
A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly.) This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.
Did the U.S. government sponsor the research?
Cornell has now updated their June 10 story to say that the research received no external funding. Originally, Cornell had identified the Army Research Office, an agency within the U.S. Army that funds basic research in the military's interest, as one of the funders of their experiment.
Do these kinds of News Feed tweaks happen at other times?
At any one time, Facebook said last year, there were on average 1,500 pieces of content that could show up in your News Feed. The company uses an algorithm to determine what to display and what to hide.
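As a toy illustration of what "an algorithm to determine what to display and what to hide" might look like at its simplest (the signals and weights here are invented for the example, not Facebook's actual ranking model): score each candidate story and show only the top few.

from typing import NamedTuple

class Story(NamedTuple):
    text: str
    friend_affinity: float   # hypothetical closeness-to-friend signal, 0..1
    recency: float           # hypothetical freshness signal, 0..1
    engagement: float        # hypothetical predicted likes/comments, 0..1

def rank_feed(candidates, top_k=3):
    """Toy ranking: weighted sum of signals, keep the top_k stories."""
    def score(s):
        return 0.5 * s.friend_affinity + 0.3 * s.recency + 0.2 * s.engagement
    return sorted(candidates, key=score, reverse=True)[:top_k]

stories = [
    Story("Cousin's wedding photos", 0.9, 0.4, 0.8),
    Story("News article a coworker shared", 0.3, 0.9, 0.5),
    Story("Old acquaintance's status", 0.1, 0.2, 0.1),
    Story("Close friend's check-in", 0.8, 0.7, 0.4),
]
for s in rank_feed(stories):
    print(s.text)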
It talks about this algorithm very rarely, but we know it's very powerful. Last year, the company changed News Feed to surface more news stories. Websites like BuzzFeed and Upworthy proceeded to see record-busting numbers of visitors.
So we know it happens. Consider Fiske's explanation of the research ethics here—the study was approved "on the grounds that Facebook apparently manipulates people's News Feeds all the time." And consider also that from this study alone Facebook knows at least one knob to tweak to get users to post more words on Facebook.
* This post originally stated that an institutional review board, or IRB, was consulted before the experiment took place regarding certain aspects of data collection.
Adrienne LaFrance contributed writing and reporting.
...............................................
http://www.washingtonpost.com/blogs/the-switch/wp/2014/10/02/facebook-changes-its-research-rules-after-mood-study-backlash/
Facebook changes its research rules after mood study backlash
By Hayley Tsukayama and Brian Fung October 2
Facebook is changing the way it conducts research using its site, following a summer uproar over a mood study that the social network's researchers published in the Proceedings of the National Academy of Sciences.
For the study, researchers altered nearly 700,000 users' News Feeds to reduce either the happy or the sad posts they saw from friends, and found that the tone of friends' posts had a corresponding effect on Facebook users' moods. Once the article was published, many Facebook users complained that the social network had no right to manipulate their feelings -- and certainly not without explicitly informing Facebook users that they were part of a study.
Facebook chief technology officer Mike Schroepfer said in a blog post Thursday that the company was caught off guard by reaction to the study.
rest at link
...............................
a quick search using Facebook 'Mood Altering' Study
will get even more stuff to read
sigh
:(
and hey sign up for this.. oughta be interesting, huh?
Facebook Reportedly Working On Healthcare Features And Apps
Reuters
Posted: 10/03/2014 7:59 am EDT Updated: 10/03/2014 7:59 am EDT
http://www.huffingtonpost.com/2014/10/03/facebook-healthcare_n_5926140.html?utm_hp_ref=technology
oh yeah he got it
http://www.youtube.com/watch?v=9s0ukQGLXQ4
http://www.huffingtonpost.com/2014/10/06/facebook-messenger_n_5940362.html?utm_hp_ref=technology
Facebook Possibly Planning A Secret New Use For Messenger, Leaked Screenshots Reveal
The Huffington Post | By Alexis Kleinman
Posted: 10/06/2014 2:19 pm EDT Updated: 3 hours ago
Many people don't want to trust Facebook with their real names. Now new evidence suggests that Facebook wants people to trust it with their money.
Last summer, Facebook started forcing users to switch to a separate messaging app, Facebook Messenger. We knew Facebook was trying to diversify and take over your phone with this move, but some leaked screenshots hint that it's going to use Messenger to create a Venmo-like service for people to pay their Facebook friends.
If Facebook's service is like the popular payment app Venmo, it will allow people to pay each other for anything they like, whether it's rent money or a cab ride. People use the free app to pay their friends by hooking Venmo up to their bank accounts.
Some leaked screenshots show what looks like a payment service. Security researcher and iOS developer Andrew Aude tweeted the following screenshots of code on Saturday:
@Facebook Messenger has P2P payments coming. @SquareCash style. pic.twitter.com/3NuXuuaMMC
— Andrew Aude (@andyplace2) October 4, 2014
Forensics researcher Jonathan Zdziarski tweeted the following on Sept. 9:
Not necessarily the best design to keep credit card details in Objective-C objects in resident memory. But meh. pic.twitter.com/aIUpovBkB1
— Jonathan Zdziarski (@JZdziarski) September 9, 2014
Facebook declined to comment to The Huffington Post.
This development shouldn't come as a huge surprise, since PayPal's president, David Marcus, moved to Facebook to lead the company's messaging products last summer. Still, it's hard to imagine people trusting Facebook with their money, since the social media company has been so sketchy with people's private data.
The last time Facebook tried to get into e-commerce (with "Facebook Gifts") it didn't go so well. Facebook discontinued this feature last July.
I find it funny that nobody mentions that Facebook lies to us when it tells us our friends used "Friend Finder", the app that supposedly lets us find our friends easily. We just have to give Facebook the password for our email so it can read our messages and see who sends us email and who we send emails to.
I know they lie because it told me my sisters used it, and when I asked them, both denied it (and nobody in their right mind would give their email password to anyone).
PS: I do use Facebook, almost only for playing games. :)
yeah my sister says that's why she uses it.. for the games
but there are better ways to get games ::)
Yes, Facebook was founded, and is run by, assholes. This isn't news to me. It's something I've known for a while now.
I use it to keep in touch with family from Victoria occasionally, and a friend from Florida. Whenever the site occasionally asks me for information, I simply close the dialogue box. Not really a big deal.
They already have my IP, which they can use to extrapolate my offline location if they want. There's not really a lot I can do about that; especially considering that I use a debit card to do shopping on a semi-daily basis as well, and I'm sure accessing my transaction record is easy enough, too.
I don't use the "if you've got nothing to hide, you've got nothing to worry about," fallacy, because as an agency associated with the government, the NSA can have the laws changed, so that having something to hide or doing the wrong thing means whatever they want it to mean.
I'm just psychologically realistic about the fact that they are ruthless, relentless psychopaths, and if they really wanted to find me, they would.