Posts tagged "Facebook"

Prince Targets Facebook Users in $22m Live Concert Piracy Lawsuit

International superstar Prince is back on the copyright warpath, yet again targeting individuals who are quite possibly some of his biggest fans. In a lawsuit filed in the Northern District of California, Prince is chasing down fans who found links to his live concerts and posted them on Facebook and blogs. The unlucky 22 individuals, 20 of whom are yet to be identified, face a damages claim of $22 million.

Prince Rogers Nelson is undoubtedly a great and prolific singer-songwriter, but if people want to be fans they had better pay for every last second of his music they listen to – or else.

Prince loves to file copyright infringement lawsuits and at the start of 2014 another has landed, ready to stir up a storm as the details become known and the case develops.

Filed in the United States District Court for the Northern District of California, the lawsuit targets 22 individuals, only two of whom are referenced by their real names. The others remain ‘Does’, although eight are identified by their online nicknames.

Sadly, with names such as PurpleHouse2, PurpleKissTwo and NPRUNIVERSE it’s difficult to see these people as anything other than Prince fans. But it is Doe 8 – THEULTIMATEBOOTLEGEXPERIENCE – that gives the clearest indication of what this lawsuit is all about.


“The Defendants in this case engage in massive infringement and bootlegging of Prince’s material,” the lawsuit reads.

“For example, in just one of the many takedown notices sent to Google with respect to Doe 2 (aka DaBang319), Prince identified 363 separate infringing links to file sharing services, with each link often containing copies of bootlegged performances of multiple separate musical compositions.”

While it’s clear by now that Prince doesn’t share the Grateful Dead’s or Nine Inch Nails’ opinion of bootlegs, for once a file-sharing site isn’t in the crosshairs. The lawsuit says the defendants used Facebook and Google’s Blogger “to accomplish their unlawful activity”, running fan pages or blogs and linking to live concert recordings without permission.

The complaint lists several pieces of audio offered by the defendants, including Prince performances from 2011 in North Carolina, 2002 in Oakland and 1983 in Chicago. Apparently even the circulation of a 31-year-old live set damages Prince’s earning capability, with the singer leveling charges of direct copyright infringement, ‘unauthorized fixation and trafficking in sound recordings’, contributory copyright infringement and bootlegging.

“Prince has suffered and is continuing to suffer damages in an amount according to proof, but no less than $1 million per Defendant,” the lawsuit reads.

Prince has a long tradition of suing anyone who dares to use his material without permission, but doesn’t always carry through on his threats. A 2007 effort to sue The Pirate Bay went nowhere. This new lawsuit is likely to go much further.

Update Jan 28: Without giving any reason, Prince has now dropped the lawsuit. The dismissal was without prejudice, so it could be raised again in the future.

Enhanced by Zemanta

Viewed 788552 times by 29339 viewers

Posted by plates55 - February 5, 2014 at 1:59 pm

Categories: Uncategorized

Facebook and Microsoft partner on hackathon

On Monday, Microsoft and Facebook announced they will partner to host a hackathon at the Facebook offices in Menlo Park, Calif.

“A number of our engineers, along with Facebook’s engineers, will be there working with everyone to show how you can easily get deep integration across Windows and Windows Phone apps,” writes Steve Guggenheimer in a blog post. “We’ll be offering 1:1 assistance along with some great speakers that will help jumpstart any app efforts you have.”

In his blog post, Guggenheimer broke down a few key buckets that he hopes will get developers excited about what they can do at the event:

· Deliver Unique Consumer Experiences: Through the Facebook Login API for Windows and Windows Phone, developers can create unique consumer experiences thanks to seamless integration with Microsoft products that consumers already use every day (e.g. Bing, Outlook and others).

· Provide developers with scale and world-class tools: Shared code means developers spend less time coding and more time making apps interesting and easy to use. The common core across the Windows platform helps developers scale their resources quickly to build Facebook-connected apps across multiple devices through reusable code, libraries and other helpful open-source tools.

· Policy – evolving macro topics to improve tech and the economy: Since 2007, Microsoft and Facebook have partnered to evolve both the technology and the macroeconomics that affect technical employees and consumers. This latest developer toolkit is just one example of how Microsoft and Facebook continue to help developers of all skill sets succeed on the platforms.


Viewed 42848 times by 9663 viewers

Posted by plates55 - January 6, 2014 at 3:28 pm

Categories: Microsoft

Boot Speed Test: iPhone 5 vs. iPhone 5c vs. iPhone 5s [Video]


Check out this video that compares the boot speed of the iPhone 5 vs. iPhone 5c vs. iPhone 5s.

All three iPhones were freshly restored to iOS 7.0 with absolutely no data or accounts set up on the device.

The counter was started at the exact moment each device received a USB signal and stopped as soon as the lock screen had loaded.

Take a look below and please follow iClarified on Twitter, Facebook, Google+, or RSS for more news, tutorials, and videos!



Viewed 26730 times by 5854 viewers

Posted by plates55 - September 22, 2013 at 10:38 am

Categories: Uncategorized

Microsoft Message Analyzer Beta

Message Analyzer is the successor to Network Monitor, but it does much more than a network sniffer or packet-tracing tool.

Key capabilities include:

  • Integrated “live” event and message capture at various system levels and endpoints
  • Parsing and validation of protocol messages and sequences
  • Automatic parsing of event messages described by ETW manifests
  • Summarized grid display – top level is “operations” (requests matched with responses)
  • User controlled “on the fly” grouping by message attributes
  • Ability to browse for logs of different types (.cap, .etl, .txt) and import them together
  • Automatic re-assembly and ability to render payloads
  • Ability to import text logs, parsing them into key element/value pairs
  • Support for “Trace Scenarios” (one or more message providers, filters, and views)
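The text-log import in the list above boils down to turning free-form lines into structured records. A minimal sketch of that idea in Python (the `key=value` layout and field names are assumptions for illustration; Message Analyzer's own text-log parsers are configuration-driven and more general):

```python
import re

def parse_log_line(line):
    """Extract key/value pairs from a text log line laid out as `key=value`."""
    return dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))

# A hypothetical firewall-style log line:
record = parse_log_line('time=18:41:02 src=10.0.0.5 dst=10.0.0.9 proto=TCP')
# record == {'time': '18:41:02', 'src': '10.0.0.5', 'dst': '10.0.0.9', 'proto': 'TCP'}
```

Once lines are reduced to records like this, the grid grouping and filtering features described above have something to operate on.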


Microsoft has released a beta and is driving toward a mid-2013 RTM.

There is also a new blog here:

(To capture at the NDIS and Firewall layers without running as admin, you must log off and back on after installation to pick up the necessary credentials. )

Sign up for the beta:


Viewed 68019 times by 7388 viewers

Posted by plates55 - September 24, 2012 at 6:41 pm

Categories: Microsoft

Is ABC Starting to Understand BitTorrent Demand?

Interesting news coming out of the Australian Broadcasting Corporation (ABC) shows that maybe BitTorrent pirates have a point when it comes to not waiting for TV shows. In an attempt to dissuade Aussie punters from torrenting the show, ABC has announced it will offer this weekend’s new Doctor Who episode on its iView service as soon as it finishes airing in the UK.

TV shows are often the most popular torrents out there, and the resurrected sci-fi series Doctor Who has an ardent following. Since it rematerialized onto our screens in 2005 it has rapidly gained a substantial and ‘hard-core’ following worldwide.

But thanks to Twitter and Facebook, as well as the more old-fashioned forums and email lists, a storyline can be ruined by ‘spoilers’ emanating from regions that gain access to the show first – a recurring theme for the BBC show in recent years.

It’s about time someone started paying attention to the concerns of fans – something we pointed out back in 2008 – so ABC’s decision to place the show on its iView service is strongly welcomed.

“Piracy is wrong, as you are denying someone their rights and income for their intellectual property,” said ABC1 controller Brendan Dahill.

“The fact that it is happening is indicative that as broadcasters we are not meeting demand for a segment of the population. So as broadcasters we need to find convenient ways of making programs available via legal means to discourage the need for piracy,” he added.

The Doctor Who episode will be available on the iView ‘catch up’ service moments after it finishes airing in the UK, although those who prefer to watch on their TVs will still have to wait until September 8th. While fans would prefer it aired sometime on the Sunday, it’s certainly a step in the right direction.

This is not the first time a sonic screwdriver has been pointed at a broadcast schedule. Transatlantic Whovians got a taste of same-day showings last spring, and will do so again this year. For others however, FACT’s actions against the expat-focused site UKNova over the weekend could not have come at a worse time.


Viewed 41395 times by 8337 viewers

Posted by plates55 - August 29, 2012 at 7:09 am

Categories: Bittorrent

Infographic: Music, movie & book biz bigger than ever

Is piracy really destroying the entertainment industry? Techdirt blogger Mike Masnick doesn’t think so, and he has some numbers to prove it. Masnick and his Floor64 colleague Michael Ho released a report titled “The Sky Is Rising” at the Midem music industry convention in Cannes Monday that shows how the global entertainment industry actually grew by 50 percent over the last decade, despite Napster, BitTorrent & Co.

The report was commissioned by the Computer & Communications Industry Association, which counts companies like Google and Facebook as its members. It’s definitely worth a read (check out the full PDF) and will likely provoke lots of discussion, especially in light of the entertainment industry’s ongoing push for tougher copyright laws.




Viewed 46100 times by 8306 viewers

Posted by plates55 - January 30, 2012 at 7:38 am

Categories: Bittorrent

MIT Genius Stuffs 100 Processors Into Single Chip

WESTBOROUGH, Massachusetts — Call Anant Agarwal’s work crazy, and you’ve made him a happy man.

Agarwal directs the Massachusetts Institute of Technology’s vaunted Computer Science and Artificial Intelligence Laboratory, or CSAIL. The lab is housed in the university’s Stata Center, a Dr. Seussian hodgepodge of forms and angles that nicely reflects the unhindered-by-reality visionary research that goes on inside.

Agarwal and his colleagues are figuring out how to build the computer chips of the future, looking a decade or two down the road. The aim is to do research that most people think is nuts. “If people say you’re not crazy,” Agarwal tells Wired, “that means you’re not thinking far out enough.”

Agarwal has been at this a while, and periodically, when some of his pie-in-the-sky research becomes merely cutting-edge, he dons his serial entrepreneur hat and launches the technology into the world. His latest commercial venture is Tilera. The company’s specialty is squeezing cores onto chips — lots of cores. A core is a processor, the part of a computer chip that runs software and crunches data. Today’s high-end computer chips have as many as 16 cores. But Tilera’s top-of-the-line chip has 100.

The idea is to make servers more efficient. If you pack lots of simple cores onto a single chip, you’re not only saving power. You’re shortening the distance between cores.


Today, Tilera sells chips with 16, 32, and 64 cores, and it’s scheduled to ship that 100-core monster later this year. Tilera provides these chips to Quanta, the huge Taiwanese original design manufacturer (ODM) that supplies servers to Facebook and, according to reports, Google. Quanta servers sold to the big web companies don’t yet include Tilera chips, as far as anyone is admitting. But the chips are on some of the companies’ radar screens.

Agarwal’s outfit is part of an ever-growing movement to reinvent the server for the internet age. Facebook and Google are now designing their own servers for their sweeping online operations. Startups such as SeaMicro are cramming hundreds of mobile processors into servers in an effort to save power in the web data center. And Tilera is tackling the same task from a different angle, cramming the processors into a single chip.

Tilera grew out of a DARPA- and NSF-funded MIT project called RAW, which produced a prototype 16-core chip in 2002. The key idea was to combine a processor with a communications switch. Agarwal calls this combination a tile, and he builds many of these tiles into a single piece of silicon, creating what’s known as a “mesh network.”

“Before that you had the concept of a bunch of processors hanging off of a bus, and a bus tends to be a real bottleneck,” Agarwal says. “With a mesh, every processor gets a switch and they all talk to each other…. You can think of it as a peer-to-peer network.”
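The scaling argument can be made concrete. With dimension-ordered routing on a two-dimensional mesh, a message travels the Manhattan distance between tiles, so the worst-case path grows with the square root of the core count instead of every core contending for one shared bus. A hypothetical sketch (the grid coordinates and routing policy here are illustrative, not Tilera's actual design):

```python
def mesh_hops(src, dst):
    """Hop count between two tiles under dimension-ordered (X-then-Y) routing:
    the Manhattan distance between their grid coordinates."""
    (x1, y1), (x2, y2) = src, dst
    return abs(x1 - x2) + abs(y1 - y2)

# On a 10x10 grid of 100 tiles, the worst case is corner to corner:
worst_case = mesh_hops((0, 0), (9, 9))  # 18 hops, vs. 100 cores sharing one bus
```

Because each tile's switch forwards traffic independently, many such transfers can be in flight at once, which is the "peer-to-peer network" behavior Agarwal describes.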

What’s more, Tilera made a critical improvement to the cache memory that’s part of each core. Agarwal and company made the cache dynamic, so that every core has a consistent copy of the chip’s data. This Dynamic Distributed Cache makes the cores act like a single chip so they can run standard software. The processors run the Linux operating system and programs written in C++, and a large chunk of Tilera’s commercialization effort focused on programming tools, including compilers that let programmers recompile existing programs to run on Tilera processors.

The end result is a 64-core chip that handles more transactions and consumes less power than an equivalent batch of x86 chips. A 400-watt Tilera server can replace eight x86 servers that together draw 2,000 watts. Facebook’s engineers have given the chip a thorough tire-kicking, and Tilera says it has a growing business selling its chips to networking and videoconferencing equipment makers. Tilera isn’t naming names, but claims one of the top two videoconferencing companies and one of the top two firewall companies.
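Taking the consolidation claim at face value, the implied arithmetic is simple. A quick sketch, assuming equal total throughput between the two setups (which is what the claim asserts):

```python
tilera_watts = 400       # one Tilera server, per the claim above
x86_watts = 8 * 250      # eight x86 servers drawing 2,000 watts in total

power_reduction = 1 - tilera_watts / x86_watts   # fraction of power saved
perf_per_watt_gain = x86_watts / tilera_watts    # speedup per watt at equal work
```

That works out to an 80 percent power reduction, or a 5x improvement in performance per watt, under the stated assumption.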

An Army of Wimps

There’s a running debate in the server world over what are called wimpy nodes. Startups SeaMicro and Calxeda are carving out a niche for low-power servers based on processors originally built for cellphones and tablets. Carnegie Mellon professor Dave Andersen calls these chips “wimpy.” The idea is that building servers with more but lower-power processors yields better performance for each watt of power. But some have downplayed the idea, pointing out that it only works for certain types of applications.

Tilera takes the position that wimpy cores are okay, but wimpy nodes — aka wimpy chips — are not.

Keeping the individual cores wimpy is a plus because a wimpy core is low power. But if your cores are spread across hundreds of chips, Agarwal says, you run into problems: inter-chip communications are less efficient than on-chip communications. Tilera gets the best of both worlds by using wimpy cores but putting many cores on a chip. But it still has a ways to go.

There’s also a limit to how wimpy your cores can be. Google’s infrastructure guru, Urs Hölzle, published an influential paper on the subject in 2010. He argued that in most cases brawny cores beat wimpy cores. To be effective, he argued, wimpy cores need to be no less than half the power of higher-end x86 cores.

Tilera is boosting the performance of its cores. The company’s most recent generation of data center server chips, released in June, are 64-bit processors that run at 1.2 to 1.5 GHz. The company also doubled DRAM speed and quadrupled the amount of cache per core. “It’s clear that cores have to get beefier,” Agarwal says.

The whole debate, however, is somewhat academic. “At the end of the day, the customer doesn’t care whether you’re a wimpy core or a big core,” Agarwal says. “They care about performance, and they care about performance per watt, and they care about total cost of ownership, TCO.”

Tilera’s performance-per-watt claims were validated by a paper published by Facebook engineers in July. The paper compared Tilera’s second-generation 64-core processor to Intel’s Xeon and AMD’s Opteron high-end server processors. Facebook put the processors through their paces on Memcached, a high-performance database memory system for web applications.

According to the Facebook engineers, a tuned version of Memcached on the 64-core Tilera TILEPro64 yielded at least 67 percent higher throughput than low-power x86 servers. Taking power and node integration into account as well, a TILEPro64-based S2Q server with 8 processors handled at least three times as many transactions per second per Watt as the x86-based servers.

Despite the glowing words, Facebook hasn’t thrown its arms around Tilera. The stumbling block, cited in the paper, is the limited amount of memory the Tilera processors support. Thirty-two-bit cores can only address about 4GB of memory. “A 32-bit architecture is a nonstarter for the cloud space,” Agarwal says.
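The 4GB figure follows directly from the width of the address: a 32-bit pointer can name 2^32 distinct bytes. A back-of-the-envelope check:

```python
# A 32-bit pointer addresses 2**32 distinct bytes.
addressable_bytes_32 = 2 ** 32
gib = addressable_bytes_32 / 2 ** 30   # bytes per GiB -> 4.0

# 64-bit addressing removes that ceiling: a terabyte of memory
# (2**40 bytes) is a tiny fraction of the 2**64-byte address space.
terabyte = 2 ** 40
headroom = 2 ** 64 // terabyte
```

This is why the move to 64-bit parts, discussed below, changes the picture for memory-hungry workloads like Memcached.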

Tilera’s 64-bit processors change the picture. These chips support as much as a terabyte of memory. Whether the improvement is enough to seal the deal with Facebook, Agarwal wouldn’t say. “We have a good relationship,” he says with a smile.

While Intel Lurks

Intel is also working on many-core chips, and it expects to ship a specialized 50-core processor, dubbed Knights Corner, in the next year or so as an accelerator for supercomputers. Unlike the Tilera processors, Knights Corner is optimized for floating point operations, which means it’s designed to crunch the large numbers typical of high-performance computing applications.

In 2009, Intel announced an experimental 48-core processor code-named Rock Creek and officially labeled the Single-chip Cloud Computer (SCC). The chip giant has since backed off of some of the loftier claims it was making for many-core processors, and it focused its many-core efforts on high-performance computing. For now, Intel is sticking with the Xeon processor for high-end data center server products.

Dave Hill, who handles server product marketing for Intel, takes exception to the Facebook paper. “Really what they compared was a very optimized set of software running on Tilera versus the standard image that you get from the open source running on the x86 platforms,” he says.

The Facebook engineers ran over a hundred different permutations in terms of the number of cores allocated to the Linux stack, the networking stack and the Memcached stack, Hill says. “They really kinda fine tuned it. If you optimize the x86 version, then the paper probably would have been more apples to apples.”

Tilera’s roadmap calls for its next generation of processors, code-named Stratton, to be released in 2013. The product line will expand the number of processors in both directions, down to as few as four and up to as many as 200 cores. The company is going from a 40-nm to a 28-nm process, meaning they’re able to cram more circuits in a given area. The chip will have improvements to interfaces, memory, I/O and instruction set, and will have more cache memory.

But Agarwal isn’t stopping there. As Tilera churns out the 100-core chip, he’s leading a new MIT effort dubbed the Angstrom project. It’s one of four DARPA-funded efforts aimed at building exascale supercomputers. In short, it’s aiming for a chip with 1,000 cores.


Viewed 35070 times by 7614 viewers

Posted by plates55 - January 23, 2012 at 2:38 pm

Categories: Gadget

Google Hopes to Make Friends with a More Social Search

Appearing atop Google’s search results used to be the exclusive right of Web celebrities and Fortune 500 companies. Starting this week, your mom is just as likely to show up at the top of those results—providing she uses Google’s still fledgling social network, Google+.

The change represents a fundamental shift, as Google’s algorithm-driven search is going through a social overhaul as it attempts to head off the threat of disruption from socially focused companies, such as Facebook and LinkedIn. The new Google service, called “Search, plus Your World,” is part of that effort.

Over the next few days, Google will start adding information that has been shared publicly and privately on Google+ to its search results.

This means you might see a picture of a friend’s dog when searching for Pomeranians, or a restaurant recommended by a friend when you search for nearby eateries. Even if you aren’t a Google+ user, Google search results will show content posted publicly on the social network that it judges to be relevant—profile pages and pages dedicated to particular topics.

The goal, says Google fellow Ben Smith, is to deliver more personally relevant results. “We’re interested in making Google search as good as we can,” says Smith. “But we need to know who your friends are and what your connections are. Google+ provides a great way of managing your connections and your friends and lets you make your search results better.”

The only problem is, until more people start using Google+, these search results will include just a small fraction of the social information available online. The rest exists in unsearchable silos owned by Facebook, LinkedIn, and other smaller social media companies. Facebook presents a particular problem for Google because the vast amounts of personal information that its users post can be turned into powerful ways of filtering information and finding recommendations (see “Social Indexing” for more on this effort).

“Over the past several years, people have been benefiting from a growing diversity in the channels they use to receive information,” says Jon Kleinberg, a professor at Cornell University who researches the way information spreads online. “During this time, a major axis along which our information channels have developed is the social one.”

In March 2011, Google launched a way for users to recommend web pages by hitting a “+1” button next to a search result. These buttons can also be added to Web pages, where recommendations will feed back into search results. The approach is similar to Facebook’s “Like” button.


In June 2011, Google launched Google+ as a direct competitor to Facebook. The site won compliments for some of its features, like the ability to put contacts into different “circles” so that information is shared in a more controlled way. But after rapid early uptake, Google+ has struggled to capture market share from Facebook, with around 60 million active users compared to Facebook’s more than 800 million.

The new features may not only make Google search more useful, but also encourage greater use of Google+. Showing Google+ profile pages and topic pages prominently could encourage people to create their own profile and topic pages.

The new service pulls in social information only from Google+ to start with, but, Smith says, it could include other, non-Google sources in the future.

Google is working hard to make its most popular services more social. Whereas an algorithmic approach to finding and sorting online information was once a source of nerdy pride for the company because of its objectivity, Google is fast reinventing itself as a business that values the suggestions of its users and their friends.

How people will come to use social signals to find useful information isn’t yet clear, though. “The most natural mode of use is still fairly up in the air, and it will be fascinating to see how people’s online behavior evolves in this dimension over the next few years,” Kleinberg says.


Viewed 38453 times by 7106 viewers

Posted by plates55 - January 10, 2012 at 8:00 pm

Categories: Google

With Search+, Google Fires Another Shot At Facebook

LAS VEGAS — If last year’s launch of Google+ was the search giant’s first shot in the social wars, consider the new Search+ product its Blitzkrieg.

Launched Tuesday, Google’s new Search+ initiative integrates results culled from your Google+ social network connections into Google search queries, a major step toward weaving relevant social content into the company’s namesake product.

When you search for a term — say, “Netflix,” for example — the new product will serve up private and public instances of “Netflix” pulled from people you’re connected with on Google+, including photos, links and status updates. In addition, relevant Google+ profiles, personalities and brand pages will also be folded into results.

So a search for Netflix could yield the official site, a news story about the company, a link to a friend from Google+ talking about Netflix, and the like. Further, all of these results are tailored specifically to the friends in your network, so each person’s results will be personalized and completely different.

See also: ‘Has Google Popped the Filter Bubble?‘ By Steven Levy

It’s a huge move for Google, a company which made its billions indexing web pages with its advanced algorithms. The company’s origins are rooted in text-based search, using Larry Page’s now-famous PageRank system to create a hierarchy of relevancy for when users entered search queries. Over the years, search progressed: Google added video, images, its Instant product, and the like. The early Oughts gave rise to an age of search, so much so that “Googling” entered the English lexicon as a verb.

But as the decade progressed, another phenomenon began to take over — social. Facebook grew from a small site created in Mark Zuckerberg’s Harvard dorm room to a global presence, now boasting over 800 million users. Twitter sees millions of tweets pass through its pipes monthly. Social network LinkedIn is one of the most watched companies in the Valley. And social gaming giant Zynga just filed a multi-billion dollar IPO in December.

And as users flocked to the platform, a different kind of search evolved. It was a search based on items which users didn’t even know they wanted. Facebook begat “likes,” a way of notifying others that you like (or are at the very least interested in) something. ‘Likes’ spread fast, and liking became another way to find new and relevant content from friends.

And as Facebook widened its reach over time, Google fell further and further behind.

“One of the signals that we haven’t taken as much advantage of as we should have is that all of [our search results] were written by people,” said Jack Menzel, director of search product management, in an interview at the Consumer Electronics Show (CES). “And you, the searcher, are a unique person, looking for info specifically relevant to you.”

So the introduction of Google’s new Search+ additions ultimately serves a twofold purpose: First, Google is using the strength of its insanely popular search product to bolster its fledgling social network. As of today, Google+ has a user base somewhere in the tens of millions — far behind that of Facebook. Considering the millions upon millions of search queries entered every single day, and the implications of folding Google+ information into those results, it’s an easy way to leverage the power of Google’s existing properties to beef up its young one.

Second, it provides Google with an entirely new cache of relevant information. Google and Facebook made headlines last year after Google alluded to issues with indexing Facebook users’ individual profile data for Google’s search results. In vague terms, Google search seemed limited in how much Facebook data it was privy to. And in an age where social sharing has grown more relevant than ever before, that’s a huge chunk of pertinent information.

So Google has decided to go within for that data. User posts and data can now be searched for relevant content, and served up to individuals. While it’s nowhere near as extensive as Facebook’s treasure trove of personal data, it’s a fine start for Google’s push into social.

The new product could, however, create problems for Google. For instance, if a user searches for a recent New York Times article and the results include both the article itself and a post from a Google+ friend who shared it, the user may click on the friend’s shared result, read the headline, and never reach the publisher’s site, instead staying inside the Google+ environment. That means fewer clicks for The New York Times, and fewer ad dollars in the long run.

Further, Google has never had much luck in the realm of privacy, and adding personal results to search queries could cause user upheaval. Privacy scares and Google aren’t strangers.

But Google insists these features aren’t going to be invasive. “With your permission, and knowing about who your friends are, we can provide more tailored recommendations, and search quality will be better for consumers,” Google Chairman Eric Schmidt told reporters last fall.

The company has built a number of safeguards into the product to appease privacy wonks as well. First, by default all searches are secured by SSL encryption, protecting your queries from prying eyes. Second, a little Search+ toggle button on the results page lets you show or hide the personal results for any given search. And finally, you can turn the new features off entirely if you don’t want them integrated into your Google searches.

In all, it’s Google’s answer to recent developments in Facebook’s expanding universe. As Facebook opened up its graph to integrate better with application developers last year, huge services and publishers have flocked to the platform, and sharing has grown exponentially. If Google has classically wielded ‘search’ as its weapon, Facebook’s ‘sharing’ was its own tool of destruction.

But with Google’s new products, social search aims to become a stronger tool, integrating Google’s past strengths with what looks to be a very social future.


Viewed 37550 times by 7459 viewers

Posted by plates55 - January 10, 2012 at 12:14 pm

Categories: Google

The Law of Online Sharing

Facebook’s Mark Zuckerberg will eventually have to deal with the fact that all growth has limits.


The idea of limitless growth gives sleepless nights to environmentalists, but not to Facebook founder Mark Zuckerberg. He espouses a law of social sharing, which predicts that every year, for the foreseeable future, the amount of information you share on the Web will double.

That rule of thumb can be visualized mathematically as a rapidly growing exponential curve. More simply, our online social lives are set to get significantly busier. As for Facebook, more personal data means better ad targeting. If things work out, Zuckerberg’s net worth will follow a similar trajectory to that described in his law of social sharing.

That law is said to be mathematically derived from data inside Facebook. In ambition, it is closely modeled on Moore’s Law, which was conceived by the computer-processor pioneer Gordon Moore in 1965 and has been at work in every advance in computing since. Also an exponential curve, it states that every two years twice as many transistors can be fitted onto a chip of any given area for the same price, allowing processing power to get cheaper and more capable.
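Both "laws" describe the same exponential shape; only the doubling period differs. A small sketch (the baseline of 1 unit is arbitrary, chosen just to show the relative growth rates):

```python
def doubling_growth(initial, years, doubling_period):
    """Quantity that doubles every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

sharing = doubling_growth(1, 10, 1)      # Zuckerberg's Law: doubles yearly
transistors = doubling_growth(1, 10, 2)  # Moore's Law: doubles every two years
```

Over a decade the yearly doubling compounds to roughly 1,000x the baseline, against about 32x for a two-year doubling period, which is why the claim is so aggressive.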

There’s a hint of vanity in Zuckerberg’s attempt to ape Moore. But it makes sense to try to describe the mechanisms that have raised Facebook and other social-Web companies to power. The Web defines our time and is being rapidly reshaped by social content—from dumb viral videos to earnest pleas on serious issues. Facebook’s success has left older companies like Google scrambling to add social features to their own products (see Q&A, November/December 2011). Zuckerberg’s Law can help us understand such a sudden change of tack from a seemingly dominant company, just as Moore’s Law has long been used to plan and explain new strategies and technologies.

Inasmuch as Facebook is the company most invested in Zuckerberg’s Law, its every move can be understood as an effort to sustain the graceful upward curve of its founder’s formula. The short-term prospects look good for Zuckerberg. The original Moore’s Law is on his side; faster, cheaper computers and mobile devices have made sharing easier and allowed us to do it wherever we go. Just as important, we are willing to play along, embracing new features from Facebook and others that lead us to share things today that we wouldn’t or couldn’t have yesterday.

Facebook’s most recent major product launch, last September, is clearly aimed at validating Zuckerberg’s prophecy and may provide its first real test. An upgrade to the Open Graph platform that unleashed the now ubiquitous Like button onto the Web (see “You Are the Ad,” May/June 2011), it added a feature that allows apps and Web sites to automatically share your activity via Facebook as you go about your business. Users must first give a service permission to share automatically on their behalf. After that, frictionless sharing, as it has become known, makes sharing happen without your needing to click a Like button, or to even think about sharing. The most prominent early implementation was the music-streaming service Spotify, which can now automatically post on Facebook the details of every song you listen to. In the first two months of frictionless sharing, more than 1.5 billion “listens” were shared through Spotify and other music apps. News organizations like the Washington Post use the feature, making it possible for them to share every article a person reads on their sites or in a dedicated app. Frictionless sharing is also helping Facebook drag formerly offline activities onto the Web. An app for runners can now automatically post the time, distance, and path of a person’s morning run.
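The mechanism described above reduces to two steps: a single explicit opt-in per app, after which every activity that app reports is shared automatically, with no Like button involved. A hypothetical sketch of that flow (the class and method names are illustrative, not the real Open Graph API):

```python
# Sketch of "frictionless sharing": one manual opt-in per app, then
# automatic posting of every reported activity. All names here are
# invented for illustration.

class SharingFeed:
    def __init__(self):
        self.permitted_apps = set()  # apps the user has opted in to
        self.posts = []              # what ends up on the user's feed

    def grant_permission(self, app: str) -> None:
        """The one manual step: explicit, per-app opt-in."""
        self.permitted_apps.add(app)

    def report_activity(self, app: str, activity: str) -> None:
        """Called by the app; shared automatically only if opted in."""
        if app in self.permitted_apps:
            self.posts.append(f"{app}: {activity}")

feed = SharingFeed()
feed.report_activity("Spotify", "listened to a song")  # ignored: no opt-in yet
feed.grant_permission("Spotify")
feed.report_activity("Spotify", "listened to a song")  # shared automatically
print(feed.posts)  # prints ['Spotify: listened to a song']
```

The design point is that the permission check, not the user, is the only gate: once granted, sharing requires no further clicks, which is what removes the brake on sharing volume.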

Frictionless sharing sustains Zuckerberg’s Law by automating what used to be a manual task, thus removing a brake on the rate at which we can share. It also shows that we are willing to compromise our previous positions on how much sharing is too much. Facebook introduced a form of automatic sharing four years ago with a feature called Beacon, but it retreated after a strong backlash from users. Beacon automatically shared purchases that Facebook members made through affiliated online retailers, such as eBay. Frictionless sharing reintroduces the same basic model with the difference that it is opt-in rather than opt-out. Carl Sjogreen, a computer scientist who is a product director overseeing Open Graph, says it hasn’t elicited anything like the rage that met Beacon’s debut. “Everyone has a different idea of what they want to share, and what they want to see,” says Sjogreen. Moreover, judging by the number of Spotify updates from my Facebook friends, frictionless sharing is pretty popular.

Privacy concerns will surely arise again as Facebook and others become able to ingest and process more of our personal data. Yet our urge to share always seems to win out. The potential for GPS-equipped cell phones to become location trackers, should the government demand access to our data, has long concerned some people. A South Park episode last year even portrayed an evil caricature of Apple boss Steve Jobs standing before a wall-sized map labeled “Where Everybody in the World Is Right Now.” Six months later, to a mostly positive reception, Apple debuted a new iPhone feature called Find My Friends, which encourages users to let Apple track their location and share it.

It’s not hard to explain why we seem eager to do our bit to maintain the march of Zuckerberg’s Law. Social sites are like Skinner boxes: we press the Like button and are rewarded with attention and interaction from our friends. It doesn’t take long to get conditioned to that reward. Frictionless sharing can now push the lever for us day and night, in hopes of drawing even more attention from others.

Unfortunately for Zuckerberg and his law, not every part of that feedback loop can be so easily boosted. Frictionless sharing helps, but getting others to care is the bigger challenge. In 2009 a new social site called Blippy was launched; it connected with your credit card to create a Twitter-style online feed of everything you bought. That stream could be made public or shared with particular contacts. Blippy got a lot of press but not the wide adoption its cofounder Philip Kaplan had hoped for. “Most people thought Blippy’s biggest challenge would be getting users to share their purchases,” he says. “Turns out the hard part was getting users to look at other people’s purchases. Getting people to share is a small hump. Getting them to obsess over the data—making it fun, interesting, or useful—is the big hump.”

Sjogreen has that problem in his sights. He says he is working on ways to turn the impending flood of daily trivialities coming from frictionless sharing into something fun, interesting, and useful. Repackaging the raw information to make it more compelling to others is one tactic. “It’s the patterns and anomalies that matter to us,” he says. For example, if you notice that a friend just watched 23 episodes of Breaking Bad in a row, you may decide you should check out that show after all. Or if he sets a new personal record on his morning run, the app in the phone strapped to his arm could automatically tout it to friends. Perhaps Blippy would have thrived if it highlighted significant purchases like vacations, instead of simply blasting people with everything from grocery lists to fuel bills.

We can only guess at the effectiveness of Sjogreen’s future tactics, but it is certain that they can sustain Zuckerberg’s Law for only so long. Gordon Moore put it well in 2005 when reflecting on the success of his own law: “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.”

Facebook’s impending problem is that even if the company enables future pacemakers to share our every heartbeat, the company cannot automate caring—the most important part of the feedback loop that has driven the social Web’s ascent. Nothing can support exponential growth for long. No matter how cleverly our friends’ social output is summarized and highlighted for us, there are only so many hours in the day for us to express that we care. Today, the law of social sharing is a useful way to think about the rise of social computing, but eventually, reality will make it obsolete.





Posted by plates55 - January 3, 2012 at 1:03 pm

Categories: Uncategorized


