For the 1964 World's Fair, science fiction author Isaac Asimov wrote an article for the New York Times envisioning what the event's exhibits would look like in fifty years' time. Asimov's predictions were scrutinized in numerous think pieces and tech forecasts of 2014, the year that marked five decades since the article's publication. Since a large body of Asimov's work concerned itself with humanity's relationship with artificial intelligence, much attention was focused on the following quote:
“If machines are that smart today, what may not be in the works 50 years hence? It will be such computers, much miniaturized, that will serve as the ‘brains’ of robots.”
Most writers concluded that, while the closest thing we have to an android housekeeper is a Roomba, Asimov was right to draw the parallel between brains and computers. And while robotics is a better fit for other blogs, social media has seen a surge of change fueled by acquisitions of artificial intelligence startups.
Artificial intelligence in social networks is primarily used as an efficient way to sort through large clusters of user-generated information. The term often used to describe the functions of this kind of AI is “deep learning,” which essentially means high-level knowledge formed by analyzing and establishing patterns in large data sets (ELI5 reddit threads are a great primer on artificial intelligence topics). For social media, this means AI can help with anything from personalized product suggestions based on previous engagements, to image and voice recognition, to deep sentiment analysis.
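To make “establishing patterns in large data sets” concrete, here is a deliberately tiny, hypothetical sketch of sentiment analysis in Python. This is not how Facebook or any network actually implements deep learning (real systems use neural networks trained on millions of posts); it simply counts which words co-occur with which labels, then scores new text against those counts. All data and names below are invented for illustration.

```python
from collections import Counter

# Toy training data: (post text, label) pairs. A real system would learn
# from millions of posts, not four hand-written examples.
TRAINING = [
    ("love this product amazing quality", "positive"),
    ("great service very happy", "positive"),
    ("terrible experience waste of money", "negative"),
    ("awful support never again", "negative"),
]

def train(examples):
    """Count word frequencies per label -- the 'patterns' in the data."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Label a new post by which vocabulary it overlaps with most."""
    scores = {
        label: sum(ctr[word] for word in text.split())
        for label, ctr in counts.items()
    }
    return max(scores, key=scores.get)

model = train(TRAINING)
print(classify(model, "amazing quality very happy"))  # positive
```

The principle scales: swap the word counter for a deep neural network and the four examples for a firehose of user posts, and you have the rough shape of sentiment analysis on a social network.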
Almost every major player in the social media arena has invested internal resources in artificial intelligence or partnered with third-party teams focused on it. Don't worry, we're still quite far from Skynet, but that last good product recommendation you got online may have been the work of an AI. To help you figure out what deep learning really does for major social networks, here's the skinny on artificial intelligence in social media.
Artificial Intelligence in Social Media
Facebook’s Artificial Intelligence Research
In late 2013, renowned New York University professor Yann LeCun announced that he was accepting a leadership position at the social network's new initiative, an AI lab based at Facebook's offices. In his post, LeCun also mentioned a partnership between Facebook's new lab and NYU's Center for Data Science "to carry out research in data science, machine learning, and AI."
LeCun's impressive background in machine learning has set the stage for research coming out of Facebook's Artificial Intelligence initiative. Research publications available in the dedicated section of Facebook's Research site range from studies of neural networks learning to predict hashtags to pattern-recognition algorithms that help you tag your friends in Facebook photos. Artificial intelligence researchers at Facebook have also been working on a new set of questions for a more sophisticated version of the Turing test, with the goal of developing a Siri-like assistant that learns intelligent answers instead of drawing from a pre-loaded script bank, as most digital assistants do now.
That is all fine and exciting for AI nerds such as myself, but what value does artificial intelligence in social media offer to other Facebook users? The truth is, artificial intelligence is what makes the everyday user experience on Facebook better. Deep learning technologies that sort through large databases help Facebook fine-tune its suggestions, filter the News Feed, surface trending topics, and suggest the right friends to tag in photos, all without spending too much manpower on data analysis. With over 800 million users logging in and generating massive amounts of data every day, advanced deep learning technology is the best way for the network to make that information work to users' advantage.
Bonus: are you smarter than Facebook’s artificial intelligence algorithms? Test yourself in this quiz from the New Scientist.
If you're having trouble picturing all the implications of deep learning algorithms on Facebook, then imagine that tripled for Google. Google's acquisition of British artificial intelligence startup DeepMind in January 2014 made a splash in both tech and AI academic communities. Reportedly acquired for over $400 million, DeepMind now employs a large portion of the world's leading researchers in deep learning. Peter Norvig, Google's director of research, perfectly summed up his company's ace in the hole when it comes to recruiting AI experts in an interview with MIT Technology Review: "We said to Geoff [Hinton, one of the world's leading deep learning researchers], 'We like your stuff. Would you like to run models that are 100 times bigger than anyone else's?' That was attractive to him."
Google's access to data indeed makes an attractive playground for AI researchers. Deep learning technologies have the potential to vastly improve the company's search engine, as well as contribute to concurrent robotics research. Indeed, another side of Google's venture into artificial intelligence is the company's investment in several leading robotics companies, which means that if we see a robot housemaid any time soon, it will likely come with Google's logo on it.
Google DeepMind recently made headlines after one of its programs beat thirty Atari games, outperforming a human player in at least one of them. According to a New Yorker article on the topic, the DeepMind team now claims that the program is a "novel artificial agent" that combines two existing forms of brain-inspired machine intelligence: a deep neural network and a reinforcement-learning algorithm. If that sentence doesn't immediately sound revolutionary to you, let's break it down: a program combining these two forms of intelligence can not only analyze and extrapolate patterns from existing data, but also learn from those patterns in order to achieve a highly desired objective: winning the game. This new form of learning has huge implications for the future of artificial intelligence programs, since faster, more efficient learning can save a lot of operational memory and accomplish tasks more quickly.
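To make the reinforcement-learning half of that combination concrete, here is a minimal, hypothetical sketch: tabular Q-learning on a toy five-cell "game" where the agent earns a reward for reaching the rightmost cell. DeepMind's agent replaces the lookup table below with a deep neural network reading raw screen pixels, but the trial-and-error update rule is the same idea. Every name and parameter here is invented for illustration.

```python
import random

# Tabular Q-learning on a toy "game": walk a 5-cell track, reward at the end.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]              # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

# Value table: how good is taking action a in state s? Starts at zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    if random.random() < EPSILON:                      # explore sometimes
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # else exploit

random.seed(0)
for episode in range(200):
    state = 0
    while state != GOAL:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Reinforcement-learning update: nudge the value estimate toward
        # the reward plus the best value reachable from the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned greedy policy should step right from every cell.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```

The "deep" part of DeepMind's agent matters when the game state is a screen full of pixels rather than five cells: a table can't enumerate every screen, so a neural network learns to estimate the Q-values instead.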
While this is still quite a ways away from human cognition, using similar algorithms in its search engine could allow Google to refine its search algorithms and deliver more personalized, targeted results.
LinkedIn & Bright
2014 was an exciting year for social networks' machine learning acquisitions. In February, LinkedIn brought job search startup Bright.com into the fold. Bright uses machine learning algorithms to offer better job-candidate matches for both employers and job seekers, considering the user's historical hiring patterns along with account location, past work experience, and synonyms in job descriptions. After performing the analysis, the program assigns a Bright Score that indicates the quality of the match between the posting and the candidate.
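For illustration only, a match score of this kind can be sketched as a weighted combination of per-signal scores. Everything below (the feature names, weights, and 0-100 scaling) is invented; Bright's actual model was proprietary and far more sophisticated.

```python
# Hypothetical signals in the spirit of Bright's approach, each scored 0.0-1.0.
# The weights are made up for this sketch.
WEIGHTS = {
    "skill_overlap": 0.4,    # fraction of required skills the candidate has
    "location_match": 0.2,   # 1.0 if same metro area, else 0.0
    "experience_fit": 0.3,   # how closely years of experience fit the posting
    "title_similarity": 0.1, # synonym-aware similarity of job titles
}

def match_score(features):
    """Weighted sum of per-signal scores, scaled to a 0-100 number."""
    raw = sum(WEIGHTS[name] * value for name, value in features.items())
    return round(raw * 100)

candidate_vs_posting = {
    "skill_overlap": 0.75,
    "location_match": 1.0,
    "experience_fit": 0.6,
    "title_similarity": 0.5,
}
print(match_score(candidate_vs_posting))  # 73
```

The interesting part of a real matcher is computing those input signals, for instance recognizing that "software engineer" and "developer" are synonyms, which is where the machine learning actually lives.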
Bright made a nice addition to LinkedIn's existing job search services, "Jobs You May Be Interested In" and LinkedIn Recruiter. The startup's data-driven approach to matching is a great way to utilize information-rich LinkedIn profiles; however, we have yet to see reports of any official results of the acquisition.
Pinterest & VisualGraph, Kosei
When you think of Pinterest, machine learning isn't the first thing that comes to mind. Then again, not everyone saw Pinterest becoming the e-commerce giant it has over the past few months; the bookmarking network has been serving up plenty of surprises. One of them was the recent acquisition of Kosei, a data software company specializing in personalized recommendation modeling.
In the official announcement of the acquisition, Pinterest identified several areas in which deep learning will benefit the network: object recognition to boost Pin and product recommendations, better ad performance and relevance prediction, and detection of spam users and content.
The Kosei acquisition came only a year after Pinterest bought VisualGraph, a two-man startup specializing in image recognition and search. Both acquisitions will go a long way toward helping Pinterest become a robust e-commerce engine by recommending products based on content pinned to the network.