Introduction Notes - Detailed


What Is Web 2.0?

Web 2.0

The term "Web 2.0" refers to a perceived second generation of web development and
design, that aims to facilitate communication, secure information sharing, interoperability,
and collaboration on the World Wide Web. Web 2.0 concepts have led to the development
and evolution of web-based communities, hosted services, and applications; such as social-
networking sites, video-sharing sites, wikis, blogs, and folksonomies.

The term was first used by Dale Dougherty and Craig Cline, and it became notable after the
O'Reilly Media Web 2.0 conference in 2004.[1][2] Although the term suggests a new
version of the World Wide Web, it does not refer to an update to any technical
specifications, but rather to changes in the ways software developers and end-users utilize
the Web. According to Tim O'Reilly:

“ Web 2.0 is the business revolution in the computer industry caused by the move to
the Internet as a platform, and an attempt to understand the rules for success on
that new platform.[3] ”

O'Reilly has noted that the "2.0" refers to the historical context of web businesses "coming
back" after the 2001 collapse of the dot-com bubble, in addition to the distinguishing
characteristics of the projects that survived the bust or thrived thereafter.[4]

Tim Berners-Lee, inventor of the World Wide Web, has questioned whether one can use the
term in any meaningful way, since many of the technological components of Web 2.0 have
existed since the early days of the Web.[5][6]

Definition

Web 2.0 encapsulates the idea of the proliferation of interconnectivity and interactivity of
web-delivered content. Tim O'Reilly regards Web 2.0 as the way that business embraces the
strengths of the web and uses it as a platform. O'Reilly considers that Eric Schmidt's
abridged slogan, don't fight the Internet, encompasses the essence of Web 2.0 — building
applications and services around the unique features of the Internet, as opposed to
expecting the Internet to suit as a platform (effectively "fighting the Internet").

In the opening talk of the first Web 2.0 conference, O'Reilly and John Battelle summarized
what they saw as the themes of Web 2.0. They argued that the web had become a
platform, with software above the level of a single device, leveraging the power of "The
Long Tail," and with data as a driving force. According to O'Reilly and Battelle, an
architecture of participation where users can contribute website content creates network
effects. Web 2.0 technologies tend to foster innovation in the assembly of systems and sites
composed by pulling together features from distributed, independent developers. (This
could be seen as a kind of "open source" or possibly "Agile" development process,
consistent with an end to the traditional software adoption cycle, typified by the so-called
"perpetual beta".)

Web 2.0 technology encourages lightweight business models enabled by syndication of
content and of services and by ease of adoption by early adopters.[7]

O'Reilly provided examples of companies or products that embody these principles in his
description of his four levels in the hierarchy of Web 2.0 sites:

• Level-3 applications, the most "Web 2.0"-oriented, exist only on the Internet,
deriving their effectiveness from the inter-human connections and from the network
effects that Web 2.0 makes possible, and growing in effectiveness in proportion as
people make more use of them. O'Reilly gave eBay, Craigslist, Wikipedia, del.icio.us,
Skype, dodgeball, and AdSense as examples.
• Level-2 applications can operate offline but gain advantages from going online.
O'Reilly cited Flickr, which benefits from its shared photo-database and from its
community-generated tag database.
• Level-1 applications operate offline but gain features online. O'Reilly pointed to
Writely (now Google Docs & Spreadsheets) and iTunes (because of its music-store
portion).
• Level-0 applications work as well offline as online. O'Reilly gave the examples of
MapQuest, Yahoo! Local, and Google Maps (mapping applications that use contributions
from users to advantage could rank as "level 2", like Google Earth). Gmail is a further
example.

Characteristics

Flickr, a Web 2.0 website that allows users to upload and share photos

Web 2.0 websites allow users to do more than just retrieve information. They can build on
the interactive facilities of "Web 1.0" to provide "Network as platform" computing, allowing
users to run software-applications entirely through a browser.[2] Users can own the data on
a Web 2.0 site and exercise control over that data.[8][2] These sites may have an
"Architecture of participation" that encourages users to add value to the application as they
use it.[2][1] This stands in contrast to traditional websites, the sort that limited visitors to
viewing and whose content only the site's owner could modify. Web 2.0 sites often feature a
rich, user-friendly interface based on Ajax[2][1] and similar client-side interactivity
frameworks, or full client-server application frameworks such as OpenLaszlo, Flex, and the
ZK framework.[8][2]

The concept of Web-as-participation-platform captures many of these characteristics. Bart
Decrem, a founder and former CEO of Flock, calls Web 2.0 the "participatory Web"[9] and
regards the Web-as-information-source as Web 1.0.

The impossibility of excluding group-members who don’t contribute to the provision of
goods from sharing profits gives rise to the possibility that rational members will prefer to
withhold their contribution of effort and free-ride on the contribution of others.[10] According
to Best,[11] the characteristics of Web 2.0 are: rich user experience, user participation,
dynamic content, metadata, web standards and scalability. Further characteristics, such as
openness, freedom[12] and collective intelligence[13] by way of user participation, can also be
viewed as essential attributes of Web 2.0.

Technology overview

The sometimes complex and continually evolving technology infrastructure of Web 2.0
includes server-software, content-syndication, messaging-protocols, standards-oriented
browsers with plugins and extensions, and various client-applications. The differing, yet
complementary approaches of such elements provide Web 2.0 sites with information-
storage, creation, and dissemination challenges and capabilities that go beyond what the
public formerly expected in the environment of the so-called "Web 1.0".

Web 2.0 websites typically include some of the following features/techniques. Andrew
McAfee used the acronym SLATES to refer to them:

Search
the ease of finding information through keyword search which makes the platform
valuable.
Links
guide to important pieces of information. The best pages are the most frequently
linked to.
Authoring
the ability to create and update content, shifting the platform from being the
creation of a few to being the constantly updated, interlinked work of many. In
wikis, content is iterative in the sense that users undo and redo each other's
work. In blogs, content is cumulative in that the posts and comments of
individuals accumulate over time.
Tags
categorization of content by creating tags that are simple, one-word descriptions to
facilitate searching and avoid rigid, pre-made categories.
Extensions
automation of some of the work and pattern matching by using algorithms e.g.
amazon.com recommendations.
Signals
the use of RSS (Really Simple Syndication) technology to notify users of any
changes to the content, for example by sending e-mails to them.[14]

Usage

Government 2.0

Web 2.0 initiatives are being employed within the public sector, giving more currency to the
term Government 2.0. For instance, Web 2.0 websites such as Twitter, YouTube and
Facebook have helped in providing a feasible way for citizens to connect with higher
government officials, which was otherwise nearly impossible. Direct interaction of higher
government authorities with citizens is replacing the age-old 'single-sided communication'
with evolved and more public interaction methodologies.[15]

Higher education

According to recent reports, universities are using Web 2.0 to reach out to and engage with
Generation Y and other prospective students.[16] Examples include social-networking
websites such as YouTube, MySpace, Facebook, Youmeo, Twitter and Flickr; upgrading
institutions’ websites in Generation Y-friendly ways (e.g., stand-alone micro-websites with
minimal navigation); and virtual learning environments such as Moodle that enable
prospective students to log on and ask questions.

In addition to free social networking websites, schools have contracted with companies that
provide many of the same services as MySpace and Facebook, but can integrate with their
existing database. Companies such as Harris Connect, iModules, and Publishing Concepts
have developed alumni online community software packages that provide schools with a
way to communicate to their alumni and allow alumni to communicate with each other in a
safe, secure environment.

Public diplomacy

Web 2.0 initiatives have been employed in public diplomacy for the Israeli government. The
country is believed to be the first to have its own official blog,[17] MySpace page,[18] YouTube
channel,[19] Facebook page[20] and a political blog.[21] The Israeli Ministry of Foreign Affairs
started the country's video blog as well as its political blog.[21] The Foreign Ministry also held
a microblogging press conference via Twitter about its war with Hamas, with Consul David
Saranga answering live questions from a worldwide public in common text-messaging
abbreviations.[22] The questions and answers were later posted on IsraelPolitik, the country's
official political blog.[23]

Social Work 2.0

Social Work 2.0 represents the use of interactive web technologies in the delivery of social
services. The term first appeared in the press in 2008.[24] In March 2009, The New Social
Worker Online started a technology blog called Social Work 2.0.[25] Social workers use Web
2.0 technologies for clinical practice, community organizing, and administrative and policy
functions. Social workers use chat programs to provide real-time (synchronous) online
therapy, or e-therapy.[26] Web 2.0 technologies also allow for self-directed treatment
through web-based modules. Self-directed treatments, such as Australia’s MoodGYM,[27] are
based on a cognitive behavioral therapy (CBT) model and have demonstrated success in
reducing symptoms of depression and anxiety.[28] Self-directed treatments have the
potential to provide services to thousands of people at minimal cost.[29]

Community organizers use interactive web technologies to rally constituents and identify
services in traditionally disadvantaged neighborhoods. For example, the National Association
of Social Workers provides updates on legislative actions via Twitter.[30] Geographical
Information Systems (GIS) technologies can be used to analyze information about specific
geographical regions, such as neighborhoods, zip codes, cities or counties. Advocacy groups
can analyze campaign demographics to improve voter participation on key social services
issues. Consumer rights advocates can use GIS to identify where services are distributed in
an area in order to better advocate for access to services and improved service delivery.

The use of web-based technologies is not without its problems for social work. Social
workers are state-regulated, leading to concerns about providing services over the Internet
to people in different states. Current licensure laws do not apply to services provided
outside of the licensing state, so clients from one state who wish to file a complaint or
lawsuit against an e-therapist in another state are in a regulatory limbo. When a clinician in
Pennsylvania provides services to a client in Texas, which state’s laws govern?[31] Until
licensing laws are updated to regulate out-of-state practice, social workers should assume
that practicing beyond state borders violates their license.[32] Despite these limitations, the
use of Web 2.0 technologies provides an important advance in social service delivery.

Web-based applications and desktops

Ajax has prompted the development of websites that mimic desktop applications, such as
word processing, the spreadsheet, and slide-show presentation. WYSIWYG wiki sites
replicate many features of PC authoring applications. Still other sites perform collaboration
and project management functions. In 2006 Google, Inc. acquired one of the best-known
sites of this broad class, Writely.[33]

Several browser-based "operating systems" have emerged, including EyeOS[34] and YouOS.[35]
Although coined as such, many of these services function less like a traditional operating
system and more as an application platform. They mimic the user experience of desktop
operating systems, offering features and applications similar to a PC environment, as well
as the added ability to run within any modern browser.

Numerous web-based application services appeared during the dot-com bubble of 1997–
2001 and then vanished, having failed to gain a critical mass of customers. In 2005, WebEx
acquired one of the better-known of these, Intranets.com, for US$45 million.[36]

Internet applications

XML and RSS

Advocates of "Web 2.0" may regard syndication of site content as a Web 2.0 feature,
involving as it does standardized protocols, which permit end-users to make use of a site's
data in another context (such as another website, a browser plugin, or a separate desktop
application). Protocols which permit syndication include RSS (Really Simple Syndication —
also known as "web syndication"), RDF (as in RSS 1.1), and Atom, all of them XML-based
formats. Observers have started to refer to these technologies as "web feeds" as the
usability of Web 2.0 evolves and the more user-friendly feed icon supplants the RSS icon.
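
To make the feed formats concrete, the short Python sketch below parses a minimal, invented RSS 2.0 document with the standard library and lists its items; a real client would fetch the XML from a site's feed URL, and Atom or RSS 1.0/RDF feeds follow the same pattern with different element names.

# Minimal sketch: parsing a (made-up) RSS 2.0 feed with Python's standard library.
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

channel = ET.fromstring(rss).find("channel")
print(channel.findtext("title"))
for item in channel.findall("item"):
    # Each <item> carries a title and a permalink that an aggregator can display.
    print(item.findtext("title"), "->", item.findtext("link"))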

Specialized protocols

Specialized protocols such as FOAF and XFN (both for social networking) extend the
functionality of sites or permit end-users to interact without centralized websites.

Web APIs

Machine-based interaction, a common feature of Web 2.0 sites, uses two main approaches
to Web APIs, which allow web-based access to data and functions: REST and SOAP.

1. REST (Representational State Transfer) Web APIs use HTTP alone to interact, with
XML (eXtensible Markup Language) or JSON payloads;
2. SOAP involves POSTing more elaborate XML messages and requests to a server that
may contain quite complex, but pre-defined, instructions for the server to follow.

Often servers use proprietary APIs, but standard APIs (for example, for posting to a blog or
notifying a blog update) have also come into wide use. Most communications through APIs
involve XML or JSON payloads.
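
As a rough illustration of the difference between the two styles, the Python sketch below shapes one request of each kind. The endpoint URLs, resource paths, and operation names are hypothetical, so the network calls are left as comments rather than executed.

# Sketch contrasting REST and SOAP API styles. All URLs, paths, and operation
# names below are hypothetical; no real service is contacted.
import urllib.request

# REST: plain HTTP verbs against resource URLs, with XML or JSON payloads.
rest_request = urllib.request.Request("https://api.example.com/blog/posts/42")
# urllib.request.urlopen(rest_request) would perform the GET and return, say,
# a JSON body such as {"id": 42, "title": "...", "body": "..."}.

# SOAP: POST an XML envelope naming a pre-defined operation to a single endpoint.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPost xmlns="https://api.example.com/blog"><PostId>42</PostId></GetPost>
  </soap:Body>
</soap:Envelope>"""
soap_request = urllib.request.Request(
    "https://api.example.com/soap",  # hypothetical SOAP endpoint
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
# urllib.request.urlopen(soap_request) would send the POST.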

See also Web Services Description Language (WSDL), the standard way of publishing a
SOAP API, and the list of Web Service specifications.

Economics

Analysis of the economic implications of "Web 2.0" applications and loosely-associated
technologies such as wikis, blogs, social-networking, open-source, open-content, file-
sharing, peer-production, etc. has also gained scientific attention. This area of research
investigates the implications Web 2.0 has for an economy and the principles underlying the
economy of Web 2.0.

Cass Sunstein's book "Infotopia" discussed the Hayekian nature of collaborative production,
characterized by decentralized decision-making, directed by (often non-monetary) prices
rather than central planners in business or government.

Don Tapscott and Anthony D. Williams argue in their book Wikinomics: How Mass
Collaboration Changes Everything (2006) that the economy of "the new web" depends on
mass collaboration. Tapscott and Williams regard it as important for new media companies
to find ways to make a profit with the help of Web 2.0. The prospective Internet-based
economy that they term "Wikinomics" would depend on the principles of openness, peering,
sharing, and acting globally. They identify seven Web 2.0 business models (peer pioneers,
ideagoras, prosumers, new Alexandrians, platforms for participation, global plantfloor, and
wiki workplace).

Organizations could make use of these principles and models in order to prosper with the
help of Web 2.0-like applications: "Companies can design and assemble products with their
customers, and in some cases customers can do the majority of the value creation". [37] "In
each instance the traditionally passive buyers of editorial and advertising take active,
participatory roles in value creation."[38] Tapscott and Williams suggest business strategies
as "models where masses of consumers, employees, suppliers, business partners, and even
competitors cocreate value in the absence of direct managerial control". [39] Tapscott and
Williams see the outcome as an economic democracy.

Some other views in the scientific debate agree with Tapscott and Williams that value-
creation increasingly depends on harnessing open source/content, networking, sharing, and
peering, but disagree that this will result in an economic democracy, predicting a subtle
form and deepening of exploitation, in which Internet-based global outsourcing reduces
labor-costs by transferring jobs from workers in wealthy nations to workers in poor nations.
In such a view, the economic implications of a new web might include on the one hand the
emergence of new business-models based on global outsourcing, whereas on the other hand
non-commercial online platforms could undermine profit-making and anticipate a co-
operative economy. For example, Tiziana Terranova speaks of "free labor" (performed
without payment) in the case where prosumers produce surplus value in the circulation-
sphere of the cultural industries.[40]

Some examples of Web 2.0 business models that attempt to generate revenues in online
shopping and online marketplaces are referred to as social commerce and social shopping.
Social commerce involves user-generated marketplaces where individuals can set up online
shops and link their shops in a networked marketplace, drawing on concepts of electronic
commerce and social networking. Social shopping involves customers interacting with each
other while shopping, typically online, and often in a social network environment. This
involvement of customers in a collaborative business model is also known as customer
involvement management (CIM). Academic research on the economic value implications of
social commerce and having sellers in online marketplaces link to each others' shops has
been conducted by researchers in the business school at Columbia University.[41]
Criticism

Critics argue that "Web 2.0" does not represent a new version of the World Wide Web
at all, but merely continues to use so-called "Web 1.0" technologies and concepts.
Techniques such as AJAX do not replace underlying protocols like HTTP, but add an
additional layer of abstraction on top of them. Many of the ideas of Web 2.0 had already
been featured in implementations on networked systems well before the term "Web 2.0"
emerged. Amazon.com, for instance, has allowed users to write reviews and consumer
guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to
outside developers in 2002.[42] Previous developments also came from research in
computer-supported collaborative learning and computer-supported cooperative work and
from established products like Lotus Notes and Lotus Domino.

In a podcast interview, Tim Berners-Lee described the term "Web 2.0" as a "piece of
jargon":

"Nobody really knows what it means...If Web 2.0 for you is blogs and wikis, then that is
people to people. But that was what the Web was supposed to be all along."[5]

Other criticism has included the term “a second bubble” (referring to the Dot-com bubble of
circa 1995–2001), suggesting that too many Web 2.0 companies attempt to develop the
same product with a lack of business models. The Economist has written of "Bubble 2.0".[43]
Venture capitalist Josh Kopelman noted that Web 2.0 had excited only 530,651 people (the
number of subscribers at that time to TechCrunch, a Weblog covering Web 2.0 matters), too
few users to make them an economically viable target for consumer applications. [44]
Although Bruce Sterling reports he's a fan of Web 2.0, he thinks it is now dead as a rallying
concept.[45]

Critics have cited the language used to describe the hype cycle of Web 2.0[46] as an example
of techno-utopianist rhetoric.[47] Web 2.0 is not the first example of communication creating
a false, hyper-inflated sense of the value of technology and its impact on culture. The dot-
com boom and subsequent bust in 2000 were a culmination of rhetoric of the technological
sublime in terms that would later make their way into Web 2.0 jargon. Indeed, several
years before the dot-com stock market crash, then-Federal Reserve chairman Alan
Greenspan described the run-up of stock values as "irrational exuberance." Shortly before
the crash of 2000, Robert J. Shiller's book Irrational Exuberance (Princeton, NJ: Princeton
University Press, 2000) was released, detailing the overly optimistic euphoria of the dot-com
industry.

Trademark

In November 2004, CMP Media applied to the USPTO for a service mark on the use of the
term "WEB 2.0" for live events.[48] On the basis of this application, CMP Media sent a cease-
and-desist demand to the Irish non-profit organization IT@Cork on May 24, 2006,[49] but
retracted it two days later.[50] The "WEB 2.0" service mark registration passed final PTO
Examining Attorney review on May 10, 2006, and was registered on June 27, 2006.[48] The
European Union application (application number 004972212, which would confer
unambiguous status in Ireland) remained pending after its filing on March 23, 2006.

The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web.
Many people concluded that the web was overhyped, when in fact bubbles and consequent
shakeouts appear to be a common feature of all technological revolutions. Shakeouts
typically mark the point at which an ascendant technology is ready to take its place at
center stage. The pretenders are given the bum's rush, the real success stories show their
strength, and there begins to be an understanding of what separates one from the other.

The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly
and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far
from having "crashed", the web was more important than ever, with exciting new
applications and sites popping up with surprising regularity. What's more, the companies
that had survived the collapse seemed to have some things in common. Could it be that the
dot-com collapse marked some kind of turning point for the web, such that a call to action
such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0
Conference was born.

In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5
million citations in Google. But there's still a huge amount of disagreement about just what
Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and
others accepting it as the new conventional wisdom.

This article is an attempt to clarify just what we mean by Web 2.0.

In our initial brainstorming, we formulated our sense of Web 2.0 by example:

Web 1.0 --> Web 2.0

DoubleClick --> Google AdSense
Ofoto --> Flickr
Akamai --> BitTorrent
mp3.com --> Napster
Britannica Online --> Wikipedia
personal websites --> blogging
evite --> upcoming.org and EVDB
domain name speculation --> search engine optimization
page views --> cost per click
screen scraping --> web services
publishing --> participation
content management systems --> wikis
directories (taxonomy) --> tagging ("folksonomy")
stickiness --> syndication

The list went on and on. But what was it that made us identify one application or approach
as "Web 1.0" and another as "Web 2.0"? (The question is particularly urgent because the
Web 2.0 meme has become so widespread that companies are now pasting it on as a
marketing buzzword, with no real understanding of just what it means. The question is
particularly difficult because many of those buzzword-addicted startups are definitely not
Web 2.0, while some of the applications we identified as Web 2.0, like Napster and
BitTorrent, are not even properly web applications!) We began trying to tease out the
principles that are demonstrated in one way or another by the success stories of web 1.0
and by the most interesting of the new applications.

1. The Web As Platform

Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a
gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie
together a veritable solar system of sites that demonstrate some or all of those principles,
at a varying distance from that core.

Figure 1 shows a "meme map" of Web 2.0 that was developed at a brainstorming session
during FOO Camp, a conference at O'Reilly Media. It's very much a work in progress, but
shows the many ideas that radiate out from the Web 2.0 core.

For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a
preliminary set of principles in our opening talk. The first of those principles was "The web
as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down
in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0
exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform.
People don't often think of it as "web services", but in fact, ad serving was the first widely
deployed web service, and the first widely deployed "mashup" (to use another term that has
gained currency of late). Every banner ad is served as a seamless cooperation between two
websites, delivering an integrated page to a reader on yet another computer. Akamai also
treats the network as the platform, and at a deeper level of the stack, building a
transparent caching and content delivery network that eases bandwidth congestion.

Nonetheless, these pioneers provided useful contrasts because later entrants have taken
their solution to the same problem even further, understanding something deeper about the
nature of the new platform. Both DoubleClick and Akamai were Web 2.0 pioneers, yet we
can also see how it's possible to realize more of the possibilities by embracing additional
Web 2.0 design patterns.

Let's drill down for a moment into each of these three cases, teasing out some of the
essential elements of difference.

Netscape vs. Google

If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard
bearer for Web 2.0, if only because their respective IPOs were defining events for each era.
So let's start with a comparison of these two companies and their positioning.

Netscape framed "the web as platform" in terms of the old software paradigm: their flagship
product was the web browser, a desktop application, and their strategy was to use their
dominance in the browser market to establish a market for high-priced server products.
Control over standards for displaying content and applications in the browser would, in
theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market.
Much like the "horseless carriage" framed the automobile as an extension of the familiar,
Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop
with information updates and applets pushed to the webtop by information providers who
would purchase Netscape servers.

In the end, both web browsers and web servers turned out to be commodities, and value
moved "up the stack" to services delivered over the web platform.

Google, by contrast, began its life as a native web application, never sold or packaged, but
delivered as a service, with customers paying, directly or indirectly, for the use of that
service. None of the trappings of the old software industry are present. No scheduled
software releases, just continuous improvement. No licensing or sale, just usage. No porting
to different platforms so that customers can run the software on their own equipment, just
a massively scalable collection of commodity PCs running open source operating systems
plus homegrown applications and utilities that no one outside the company ever gets to see.

At bottom, Google requires a competency that Netscape never needed: database
management. Google isn't just a collection of software tools, it's a specialized database.
Without the data, the tools are useless; without the software, the data is unmanageable.
Software licensing and control over APIs--the lever of power in the previous era--is
irrelevant because the software never need be distributed but only performed, and also
because without the ability to collect and manage the data, the software is of little use. In
fact, the value of the software is proportional to the scale and dynamism of the data it helps
to manage.

Google's service is not a server--though it is delivered by a massive collection of internet
servers--nor a browser--though it is experienced by the user within the browser. Nor does
its flagship search service even host the content that it enables users to find. Much like a
phone call, which happens not just on the phones at either end of the call, but on the
network in between, Google happens in the space between browser and search engine and
destination content server, as an enabler or middleman between the user and his or her
online experience.

While both Netscape and Google could be described as software companies, it's clear that
Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other
companies that got their start in the 1980's software revolution, while Google's fellows are
other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.

Folksonomy
Not to be confused with folk taxonomy.

Folksonomy (also known as collaborative tagging, social classification, social
indexing, and social tagging) is the practice and method of collaboratively creating and
managing tags to annotate and categorize content. Folksonomy describes the bottom-up
classification systems that emerge from social tagging.[1] In contrast to traditional subject
indexing, metadata is generated not only by experts but also by creators and consumers of
the content. Usually, freely chosen keywords are used instead of a controlled vocabulary.[2]
Folksonomy (from folk + taxonomy) is a user-generated taxonomy.

Folksonomies became popular on the Web around 2004 as part of social software
applications including social bookmarking and annotating photographs. Tagging, which is
characteristic of Web 2.0 services, allows non-expert users to collectively classify and find
information. Some websites include tag clouds as a way to visualize tags in a folksonomy.

Typically, folksonomies are Internet-based, although they are also used in other contexts.
Aggregating the tags of many users creates a folksonomy. [1] Aggregation is the pulling
together of all of the tags in an automated way.[1] Folksonomic tagging is intended to make
a body of information increasingly easy to search, discover, and navigate over time. A well-
developed folksonomy is ideally accessible as a shared vocabulary that is both originated
by, and familiar to, its primary users. Two widely cited examples of websites using
folksonomic tagging are Flickr and Delicious, although Flickr may not be a good example of
folksonomy.[3]
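
As a small illustration of aggregation, the Python sketch below (with invented users, tags, and a hypothetical resource URL) pulls the freely chosen tags of several users together into a count per tag; a real system would read taggings from its database or API.

# Minimal sketch of folksonomy aggregation: combine the tags many users have
# applied to a resource into one shared, weighted view. Data is invented.
from collections import Counter

taggings = [  # (user, resource, tag) triples
    ("alice", "http://example.com/article", "web2.0"),
    ("bob",   "http://example.com/article", "ajax"),
    ("carol", "http://example.com/article", "web2.0"),
    ("dave",  "http://example.com/article", "folksonomy"),
]

def aggregate(taggings, resource):
    """Count how many users applied each tag to the given resource."""
    return Counter(tag for _user, res, tag in taggings if res == resource)

print(aggregate(taggings, "http://example.com/article"))
# Counter({'web2.0': 2, 'ajax': 1, 'folksonomy': 1})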

As folksonomies develop in Internet-mediated social environments, users can discover who
used a given tag and see the other tags that this person has used. In this way, folksonomy
users can discover the tag sets of another user who tends to interpret and tag content in a
way that makes sense to them. The result can be a rewarding gain in the user's capacity to
find related content (a practice known as "pivot browsing"). Part of the appeal of
folksonomy is its inherent subversiveness: when faced with the choice of the search tools
that Web sites provide, folksonomies can be seen as a rejection of the search engine status
quo in favor of tools that are created by the community.

Folksonomy creation and searching tools are not part of the underlying World Wide Web
protocols. Folksonomies arise in Web-based communities where provisions are made at the
site level for creating and using tags. These communities are established to enable Web
users to label and share user-generated content, such as photographs, or to collaboratively
label existing content, such as Web sites, books, works in the scientific and scholarly
literatures, and blog entries.

Practical evaluation

Folksonomy is criticized because its lack of terminological control causes it to be more likely
to produce unreliable and inconsistent results. If tags are freely chosen (instead of taken
from a given vocabulary), synonyms (multiple tags for the same concept), homonymy
(same tag used with different meaning), and polysemy (same tag with multiple related
meanings) are likely to arise, lowering the efficiency of content indexing and searching.[4]
Other reasons for meta noise are the lack of stemming (normalization of word inflections)
and the heterogeneity of users and contexts.
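
The sketch below illustrates, in Python, the kind of lightweight normalization a tagging system might apply against this meta noise: case folding, a small synonym map, and crude plural stripping. The synonym table and plural rule are invented for illustration; production systems would rely on proper stemmers and curated vocabularies.

# Minimal sketch of tag normalization to reduce meta noise from case, plurals,
# and synonyms. The synonym map and plural rule are invented for illustration.
SYNONYMS = {"weblog": "blog", "weblogs": "blog", "folksonomies": "folksonomy"}

def normalize(tag):
    tag = tag.strip().lower()             # case folding
    tag = SYNONYMS.get(tag, tag)          # collapse known synonyms
    if tag.endswith("s") and len(tag) > 3:
        tag = tag[:-1]                    # crude plural stripping, not real stemming
    return tag

print([normalize(t) for t in ["Blogs", "weblog", "Tags", "tag"]])
# ['blog', 'blog', 'tag', 'tag']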

Classification systems have several problems: they can be slow to change, they reflect (and
reinforce) a particular worldview, they are rooted in the culture and era that created them,
and they can be absurd at times.[1] Idiosyncratic folksonomic classification within a clique
can especially reinforce pre-existing viewpoints. Folksonomies are routinely generated by
people who have spent a great deal of time interacting with the content they tag, and may
not properly identify the content's relationship to external items.

For example, items tagged as "Web 2.0" represent seemingly inconsistent and contradictory
resources.[5] The lack of a hierarchical or systematic structure for the tagging system makes
the terms relevant to what they are describing, but often fails to show their relevancy or
relationship to other objects of the same or similar type.

Origin

The term folksonomy is generally attributed to Thomas Vander Wal.[6][7] It is a portmanteau
of the words folk (or folks) and taxonomy that specifically refers to subject indexing
systems created within Internet communities. Folksonomy has little to do with taxonomy—
the latter refers to an ontological, hierarchical way of categorizing, while folksonomy
establishes categories (each tag is a category) that are theoretically "equal" to each other
(i.e., there is no hierarchy, or parent-child relation between different tags).

Early attempts and experiments include the World Wide Web Consortium's Annotea project
with user-generated tags in 2002.[8] According to Vander Wal, a folksonomy is "tagging that
works".

Folksonomy is unrelated to folk taxonomy, a cultural practice that has been widely
documented in anthropological and folkloristic work. Folk taxonomies are culturally supplied,
intergenerationally transmitted, and relatively stable classification systems that people in a
given culture use to make sense of the entire world around them (not just the Internet).[9]
Folksonomy and the Semantic Web

Folksonomy may hold the key to developing a Semantic Web, in which every Web page
contains machine-readable metadata that describes its content.[10] Such metadata would
dramatically improve the precision (the percentage of relevant documents) in search engine
retrieval lists.[11] However, it is difficult to see how the large and varied community of Web
page authors could be persuaded to add metadata to their pages in a consistent, reliable
way; web authors who wish to do so experience high entry costs because metadata systems
are time-consuming to learn and use.[12] For this reason, few Web authors make use of the
simple Dublin Core metadata standard, even though the use of Dublin Core meta-tags could
increase their pages' prominence in search engine retrieval lists.[13] In contrast to more
formalized, top-down classifications using controlled vocabularies, folksonomy is a
distributed classification system with low entry costs.[14]

Enterprise

Since folksonomies are user-generated and therefore inexpensive to implement, advocates
of folksonomy believe that it provides a useful low-cost alternative to more traditional,
institutionally supported taxonomies or controlled vocabularies. An employee-generated
folksonomy could therefore be seen as an "emergent enterprise taxonomy".[15] Some
folksonomy advocates believe that it is useful in facilitating workplace democracy and the
distribution of management tasks among people actually doing the work.

However, workplace democracy is a utopian concept at odds with the governing reality of
enterprises, the majority of which exist and thrive as hierarchically structured corporations
not especially aligned to democratically informed governance and decision-making. Also,
while a folksonomy may facilitate workflow as a distribution method, it does not guarantee
that information workers will tag at all, let alone tag consistently, without bias, and without
intentional malice directed at the enterprise.

Folksonomy and top-down taxonomies

Commentators and information architects have contrasted the hierarchical approach of top-
down taxonomies with the folksonomy approach. The former approach is prevalent and
represented by many practical examples.

One such example is Yahoo!, one of the earliest general directories for content on the Web.
Yahoo! and other similar sites organized and presented links under a fixed hierarchy. This
approach imposed one set of tags and one sort order, although hyperlinking enabled at
least a limited ability to traverse distant nodes in the hierarchy based on related subject
matter. Clay Shirky is one commentator who has offered explanations for why this approach
is limited.[16]

Compromise with top-down taxonomies

The differences between taxonomies and folksonomies may have been overestimated.[17] A
possible solution to the shortcomings of folksonomies and controlled vocabulary is a
collabulary, which can be conceptualized as a compromise between the two: a team of
classification experts collaborates with content consumers to create rich, but more
systematic content tagging systems. A collabulary arises much the way a folksonomy does,
but it is developed in a spirit of collaboration with experts in the field. The result is a system
that combines the benefits of folksonomies—low entry costs, a rich vocabulary that is
broadly shared and comprehensible by the user base, and the capacity to respond quickly to
language change—without the errors that inevitably arise in naive, unsupervised
folksonomies.

The ability to group tags, such as that provided by Delicious's "bundles", [18] provides one
way for taxonomists to work with an underlying folksonomy. This allows structure to be
added without the need for direct collaboration between classification experts and content
consumers.

Another possible solution is a taxonomy-directed folksonomy,[19] which relies on user
interfaces to suggest tags from a formal taxonomy, but allows users to apply their own
tags as well.

Main problems of folksonomy tagging

Four main problems of folksonomy tagging are plurals, polysemy, synonymy, and depth
(specificity) of tagging. Folksonomy-based systems can employ optional authority control of
subject keywords, place, personal, or corporate names and resource titles, by connecting
the system to established authority control files or controlled vocabularies using new
techniques. A folksonomy-based system needs a controlled vocabulary and a suggestion-
based system.[20]

Software as a service

Software as a Service (SaaS, typically pronounced 'sass') is a model of software
deployment where an application is licensed for use as a service provided to customers on
demand. On-demand licensing and use alleviate the customer's burden of equipping a
device with every application. It also reduces traditional End User License Agreement
(EULA) software maintenance, ongoing operation patches, and patch support complexity in
an organization. On-demand licensing enables software to become a variable expense,
rather than a fixed cost at the time of purchase. It also enables licensing only the amount of
software needed versus traditional licenses per device. SaaS also enables the buyer to share
licenses across their organization and between organizations, reducing the cost of acquiring
EULAs for every device in their firm.

Using SaaS can also conceivably reduce the up-front expense of software purchases,
through less costly, on-demand pricing from hosting service providers. SaaS lets software
vendors control and limit use, prohibits copying and distribution, and facilitates control of
all derivative versions of their software. SaaS centralized control often allows the vendor or
supplier to establish an ongoing revenue stream with multiple businesses and users without
preloading software on each device in an organization. The SaaS software vendor may host
the application on its own web server, or may download the application to the consumer
device and disable it after use or after the on-demand contract expires. The on-demand
function may be handled internally, to share licenses within a firm, or by a third-party
application service provider (ASP) sharing licenses between firms. This sharing of end-user
licenses and on-demand use may also reduce investment in server hardware or shift server
use to SaaS suppliers of application file services.

History

The concept of "software as a service" started to circulate prior to 1999 and was considered
to be "gaining acceptance in the marketplace" in Bennett et al., 1999 paper on "Service
Based Software" [1].

Whilst the term "software as a service" was in common use, the CamelCase acronym "SaaS"
was allegedly not coined until several years later, in a white paper called "Strategic
Backgrounder: Software as a Service"[2] by the Software & Information Industry
Association's eBusiness Division, published in February 2001 but written in the fall of 2000
according to internal association records.

Philosophy

As a term, SaaS is generally associated with business software and is typically thought of as
a low-cost way for businesses to obtain rights to use software as needed versus licensing all
devices with all applications. The on demand licensing enables the benefits of commercially
licensed use without the associated complexity and potential high initial cost of equipping
every device with the applications that are only used when needed.

Virtually all software is well suited to the SaaS model. Many Unix applications already have
this functionality whereas EULA applications have never had this flexibility before SaaS. A
licensed copy of a word processor, for example, had to reside on the machine to create a
document. The equipped program has no intrinsic value loaded on a computer that is turned
off for the night. Worse yet, the same employee may need another fully paid license to write
or edit a report at home on their own computer, while the work license is inoperable.
Remote administration software attempts to resolve this issue through sharing CPU controls
instead of licensing on demand. While promising, it requires leaving the licensed host
computer on, and it creates security issues from remotely accessing the host to run an
application. SaaS achieves efficiencies by enabling on-demand licensing and management of
the information and output, independent of the hardware location.

Other application areas such as customer relationship management (CRM), video
conferencing, human resources, IT service management, accounting, IT security, web
analytics, web content management and e-mail are some of the initial markets showing
SaaS success. The distinction between SaaS and earlier applications delivered over the
Internet is that SaaS solutions were developed specifically to leverage web technologies
such as the browser, thereby making them web-native. The data design and architecture of
SaaS applications are specifically built with a 'multi-tenant' backend, thus enabling multiple
customers or users to access a shared data model. This further differentiates SaaS from
client/server or 'ASP' (Application Service Provider) solutions in that SaaS providers
leverage enormous economies of scale in deployment, management, support, and
throughout the software development lifecycle.

Key characteristics

The key characteristics of SaaS software, according to IDC, include:[3]

• network-based access to, and management of, commercially available software
• activities that are managed from central locations rather than at each customer's
site, enabling customers to access applications remotely via the Web
• application delivery that typically is closer to a one-to-many model (single instance,
multi-tenant architecture) than to a one-to-one model, including architecture,
pricing, partnering, and management characteristics
• centralized feature updating, which obviates the need for downloadable patches and
upgrades.
• use within a larger network of communicating software, either as part of a mashup
or as a plugin to a platform as a service; service-oriented architecture is naturally
more complex than traditional models of software deployment.

SaaS applications are generally priced on a per-user basis, sometimes with a relatively
small minimum number of users and often with additional fees for extra bandwidth and
storage. SaaS revenue streams to the vendor are therefore lower initially than traditional
software license fees, but are also recurring, and therefore viewed as more predictable,
much like maintenance fees for licensed software.

In addition to the characteristics mentioned above, SaaS software turns the tragedy of the
commons on its head and frequently has these additional benefits:

• More feature requests from users since there is frequently no marginal cost for
requesting new features;
• Faster releases of new features since the entire community of users benefits from
new functionality; and
• The embodiment of recognized best practices since the community of users drives
the software publisher to support the best practice.

Implementation

According to Microsoft, SaaS architectures generally can be classified as belonging to one of
four "maturity levels," whose key attributes are configurability, multi-tenant efficiency, and
scalability.[4] Each level is distinguished from the previous one by the addition of one of
those three attributes:

• Level 1 - Ad-Hoc/Custom: At the first level of maturity, each customer has its own
customized version of the hosted application and runs its own instance of the
application on the host's servers. Migrating a traditional non-networked or client-
server application to this level of SaaS typically requires the least development effort
and reduces operating costs by consolidating server hardware and administration.

• Level 2 - Configurable: The second maturity level provides greater program flexibility
through configurable metadata, so that many customers can use separate instances
of the same application code. This allows the vendor to meet the different needs of
each customer through detailed configuration options, while simplifying maintenance
and updating of a common code base.

• Level 3 - Configurable, Multi-Tenant-Efficient: The third maturity level adds multi-
tenancy to the second level, so that a single program instance serves all customers.
This approach enables more efficient use of server resources without any apparent
difference to the end user, but ultimately is limited in its scalability.
• Level 4 - Scalable, Configurable, Multi-Tenant-Efficient: At the fourth and final SaaS
maturity level, scalability is added through a multitier architecture supporting a load-
balanced farm of identical application instances, running on a variable number of
servers. The system's capacity can be increased or decreased to match demand by
adding or removing servers, without the need for any further alteration of application
software architecture.
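
To make the "single instance, multi-tenant" idea of levels 3 and 4 concrete, the following minimal Python sketch keeps several customers' rows in one shared schema and scopes every query by a tenant identifier; the table, columns, and tenant names are hypothetical.

# Minimal sketch of single-instance, multi-tenant data access: one shared schema
# serves many customers, and every query filters on a tenant identifier.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("acme", "INV-1", 120.0), ("acme", "INV-2", 80.0), ("globex", "INV-1", 300.0)],
)

def invoices_for_tenant(tenant_id):
    # Every data access path filters on tenant_id, so customers never see each
    # other's rows even though they share one instance and one schema.
    return conn.execute(
        "SELECT number, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(invoices_for_tenant("acme"))    # [('INV-1', 120.0), ('INV-2', 80.0)]
print(invoices_for_tenant("globex"))  # [('INV-1', 300.0)]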

Virtualization also may be used in SaaS architectures, either in addition to multi-tenancy, or
in place of it.[5] One of the principal benefits of virtualization is that it can increase the
system's capacity without additional programming. On the other hand, a considerable
amount of programming may be required to construct a more efficient, multi-tenant
application. Combining multi-tenancy and virtualization provides still greater flexibility to
tune the system for optimal performance.[6] In addition to full operating system-level
virtualization, other virtualization techniques applied to SaaS include application
virtualization and virtual appliances.

Various types of software components and frameworks may be employed in the
development of SaaS applications. These tools can reduce the time to market and cost of
converting a traditional on-premise software product or building and deploying a new SaaS
solution. Examples include components for subscription management, grid computing
software, web application frameworks, and complete SaaS platform products.[7]

SaaS and SOA

Much like any other software, Software as a Service can also take advantage of Service
Oriented Architecture to enable software applications to communicate with each other. Each
software service can act as a Service provider, exposing its functionality to other
applications via public brokers, and can also act as a Service requester, incorporating data
and functionality from other services.

Data and Web 2.0

Blogging and the Wisdom of Crowds

One of the most highly touted features of the Web 2.0 era is the rise of blogging. Personal
home pages have been around since the early days of the web, and the personal diary and
daily opinion column around much longer than that, so just what is the fuss all about?

At its most basic, a blog is just a personal home page in diary format. But as Rich Skrenta
notes, the chronological organization of a blog "seems like a trivial difference, but it drives
an entirely different delivery, advertising and value chain."

One of the things that has made a difference is a technology called RSS. RSS is the most
significant advance in the fundamental architecture of the web since early hackers realized
that CGI could be used to create database-backed websites. RSS allows someone to link not
just to a page, but to subscribe to it, with notification every time that page changes.
Skrenta calls this "the incremental web." Others call it the "live web".
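
A crude sketch of what subscribing amounts to in practice: an aggregator polls the feed URL and surfaces entries it has not seen before. The feed URL below is hypothetical, and a well-behaved client would also honor caching headers and reasonable polling intervals.

# Sketch of the "incremental web": poll a feed and report entries not seen before.
# The feed URL is hypothetical; nothing is fetched unless the loop is enabled.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/feed.xml"  # hypothetical feed
seen_links = set()

def check_feed():
    with urllib.request.urlopen(FEED_URL) as response:
        root = ET.fromstring(response.read())
    for item in root.iter("item"):
        link = item.findtext("link")
        if link and link not in seen_links:
            seen_links.add(link)
            print("New entry:", item.findtext("title"), link)

# while True:              # a minimal aggregator loop
#     check_feed()
#     time.sleep(15 * 60)  # re-check every 15 minutes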

Now, of course, "dynamic websites" (i.e., database-backed sites with dynamically generated
content) replaced static web pages well over ten years ago. What's dynamic about the live
web are not just the pages, but the links. A link to a weblog is expected to point to a
perennially changing page, with "permalinks" for any individual entry, and notification for
each change. An RSS feed is thus a much stronger link than, say, a bookmark or a link to a
single page.

RSS also means that the web browser is not the only means of viewing a web page. While
some RSS aggregators, such as Bloglines, are web-based, others are desktop clients, and
still others allow users of portable devices to subscribe to constantly updated content.

RSS is now being used to push not just notices of new blog entries, but also all kinds of data
updates, including stock quotes, weather data, and photo availability. This use is actually a
return to one of its roots: RSS was born in 1997 out of the confluence of Dave Winer's
"Really Simple Syndication" technology, used to push out blog updates, and Netscape's
"Rich Site Summary", which allowed users to create custom Netscape home pages with
regularly updated data flows. Netscape lost interest, and the technology was carried forward
by blogging pioneer Userland, Winer's company. In the current crop of applications, we see,
though, the heritage of both parents.

But RSS is only part of what makes a weblog different from an ordinary web page. Tom
Coates remarks on the significance of the permalink:

It may seem like a trivial piece of functionality now, but it was effectively the device that
turned weblogs from an ease-of-publishing phenomenon into a conversational mess of
overlapping communities. For the first time it became relatively easy to gesture directly at a
highly specific post on someone else's site and talk about it. Discussion emerged. Chat
emerged. And - as a result - friendships emerged or became more entrenched. The
permalink was the first - and most successful - attempt to build bridges between weblogs.

In many ways, the combination of RSS and permalinks adds many of the features of NNTP,
the Network News Protocol of the Usenet, onto HTTP, the web protocol. The "blogosphere"
can be thought of as a new, peer-to-peer equivalent to Usenet and bulletin-boards, the
conversational watering holes of the early internet. Not only can people subscribe to each
others' sites, and easily link to individual comments on a page, but also, via a mechanism
known as trackbacks, they can see when anyone else links to their pages, and can respond,
either with reciprocal links, or by adding comments.

Interestingly, two-way links were the goal of early hypertext systems like Xanadu.
Hypertext purists have celebrated trackbacks as a step towards two way links. But note that
trackbacks are not properly two-way--rather, they are really (potentially) symmetrical one-
way links that create the effect of two way links. The difference may seem subtle, but in
practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn,
which require acknowledgment by the recipient in order to establish a connection, lack the
same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo
sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set
watch lists--any user can subscribe to any other user's photostream via RSS. The object of
attention is notified, but does not have to approve the connection.)

If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a
kind of global brain, the blogosphere is the equivalent of constant mental chatter in the
forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the
brain, which is often unconscious, but is instead the equivalent of conscious thought. And as
a reflection of conscious thought and attention, the blogosphere has begun to have a
powerful effect.
First, because search engines use link structure to help predict useful pages, bloggers, as
the most prolific and timely linkers, have a disproportionate role in shaping search engine
results. Second, because the blogging community is so highly self-referential, bloggers
paying attention to other bloggers magnifies their visibility and power. The "echo chamber"
that critics decry is also an amplifier.

If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging
harnesses collective intelligence as a kind of filter. What James Surowiecki calls "the wisdom
of crowds" comes into play, and much as PageRank produces better results than analysis of
any individual document, the collective attention of the blogosphere selects for value.

While mainstream media may see individual blogs as competitors, what is really unnerving
is that the competition is with the blogosphere as a whole. This is not just a competition
between sites, but a competition between business models. The world of Web 2.0 is also the
world of what Dan Gillmor calls "we, the media," a world in which "the former audience",
not a few people in a back room, decides what's important.

Data is the Next Intel Inside

Every significant internet application to date has been backed by a specialized database:
Google's web crawl, Yahoo!'s directory (and web crawl), Amazon's database of products,
eBay's database of products and sellers, MapQuest's map databases, Napster's distributed
song database. As Hal Varian remarked in a personal conversation last year, "SQL is the
new HTML." Database management is a core competency of Web 2.0 companies, so much
so that we have sometimes referred to these applications as "infoware" rather than merely
software.

This fact leads to a key question: Who owns the data?

In the internet era, one can already see a number of cases where control over the database
has led to market control and outsized financial returns. The monopoly on domain name
registry initially granted by government fiat to Network Solutions (later purchased by
Verisign) was one of the first great moneymakers of the internet. While we've argued that
business advantage via controlling software APIs is much more difficult in the age of the
internet, control of key data sources is not, especially if those data sources are expensive to
create or amenable to increasing returns via network effects.

Look at the copyright notices at the base of every map served by MapQuest,
maps.yahoo.com, maps.msn.com, or maps.google.com, and you'll see the line "Maps
copyright NavTeq, TeleAtlas," or with the new satellite imagery services, "Images copyright
Digital Globe." These companies made substantial investments in their databases (NavTeq
alone reportedly invested $750 million to build their database of street addresses and
directions. Digital Globe spent $500 million to launch their own satellite to improve on
government-supplied imagery.) NavTeq has gone so far as to imitate Intel's familiar Intel
Inside logo: Cars with navigation systems bear the imprint, "NavTeq Onboard." Data is
indeed the Intel Inside of these applications, a sole source component in systems whose
software infrastructure is largely open source or otherwise commodified.

The now hotly contested web mapping arena demonstrates how a failure to understand the
importance of owning an application's core data will eventually undercut its competitive
position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo!, and
then Microsoft, and most recently Google, decided to enter the market, they were easily
able to offer a competing application simply by licensing the same data.

Contrast, however, the position of Amazon.com. Like competitors such as
Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker.
But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied
data such as cover images, table of contents, index, and sample material. Even more
importantly, they harnessed their users to annotate the data, such that after ten years,
Amazon, not Bowker, is the primary source for bibliographic data on books, a reference
source for scholars and librarians as well as consumers. Amazon also introduced their own
proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and
creates an equivalent namespace for products without one. Effectively, Amazon "embraced
and extended" their data suppliers.

Imagine if MapQuest had done the same thing, harnessing their users to annotate maps and
directions, adding layers of value. It would have been much more difficult for competitors to
enter the market just by licensing the base data.

The recent introduction of Google Maps provides a living laboratory for the competition
between application vendors and their data suppliers. Google's lightweight programming
model has led to the creation of numerous value-added services in the form of mashups
that link Google Maps with other internet-accessible data sources. Paul Rademacher's
housingmaps.com, which combines Google Maps with Craigslist apartment rental and home
purchase data to create an interactive housing search tool, is the pre-eminent example of
such a mashup.
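
As a hedged sketch of the mashup pattern (not housingmaps.com's actual code), the following TypeScript joins a listings feed with a mapping layer. The MapLayer interface, the feed URL and its JSON fields are hypothetical stand-ins for whichever map API and data source a real mashup would use.

    // Hypothetical sketch of a map + listings mashup in the housingmaps.com
    // style. MapLayer stands in for whatever mapping API the mashup builds on;
    // the feed URL and its JSON fields are invented for illustration.
    interface Listing {
      title: string;
      price: number;
      lat: number;
      lon: number;
    }

    interface MapLayer {
      addMarker(lat: number, lon: number, label: string): void;
    }

    async function plotListings(map: MapLayer): Promise<void> {
      // Pull rental listings from a (hypothetical) JSON feed of classified ads.
      const response = await fetch("https://example.org/apartments.json");
      const listings: Listing[] = await response.json();

      // The mashup's only real "logic" is joining one site's data with
      // another site's presentation layer, one marker per listing.
      for (const item of listings) {
        map.addMarker(item.lat, item.lon, `${item.title} - $${item.price}`);
      }
    }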

At present, these mashups are mostly innovative experiments, done by hackers. But
entrepreneurial activity follows close behind. And already, one can see that for at least one
class of developer, Google has taken the role of data source away from Navteq and inserted
themselves as a favored intermediary. We expect to see battles between data suppliers and
application vendors in the next few years, as both realize just how important certain classes
of data will become as building blocks for Web 2.0 applications.

The race is on to own certain classes of core data: location, identity, calendaring of public
events, product identifiers and namespaces. In many cases, where there is significant cost
to create the data, there may be an opportunity for an Intel Inside style play, with a single
source for the data. In others, the winner will be the company that first reaches critical
mass via user aggregation, and turns that aggregated data into a system service.

For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users of
communications systems, may all be legitimate contenders to build a network-wide identity
database. (In this regard, Google's recent attempt to use cell phone numbers as an
identifier for Gmail accounts may be a step towards embracing and extending the phone
system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in
quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0 identity
subsystem. In the area of calendaring, EVDB is an attempt to build the world's largest
shared calendar via a wiki-style architecture of participation. While the jury's still out on the
success of any particular startup or approach, it's clear that standards and solutions in these
areas, effectively turning certain classes of data into reliable subsystems of the "internet
operating system", will enable the next generation of applications.

A further point must be noted with regard to data, and that is user concerns about privacy
and their rights to their own data. In many of the early web applications, copyright is only
loosely enforced. For example, Amazon lays claim to any reviews submitted to the site, but
in the absence of enforcement, people may repost the same review elsewhere. However, as
companies begin to realize that control over data may be their chief source of competitive
advantage, we may see heightened attempts at control.

Much as the rise of proprietary software led to the Free Software movement, we expect the
rise of proprietary databases to result in a Free Data movement within the next decade. One
can see early signs of this countervailing trend in open data projects such as Wikipedia, the
Creative Commons, and in software projects like Greasemonkey, which allow users to take
control of how data is displayed on their computer.

Rich User Experiences

As early as Pei Wei's Viola browser in 1992, the web was being used to deliver "applets" and
other kinds of active content within the web browser. Java's introduction in 1995 was
framed around the delivery of such applets. JavaScript and then DHTML were introduced as
lightweight ways to provide client side programmability and richer user experiences. Several
years ago, Macromedia coined the term "Rich Internet Applications" (which has also been
picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of
Flash to deliver not just multimedia content but also GUI-style application experiences.

However, the potential of the web to deliver full scale applications didn't hit the mainstream
till Google introduced Gmail, quickly followed by Google Maps, web based applications with
rich user interfaces and PC-equivalent interactivity. The collection of technologies used by
Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm
Adaptive Path. He wrote:

"Ajax isn't a technology. It's really several technologies, each flourishing in its own right,
coming together in powerful new ways. Ajax incorporates:

• standards-based presentation using XHTML and CSS;
• dynamic display and interaction using the Document Object Model;
• data interchange and manipulation using XML and XSLT;
• asynchronous data retrieval using XMLHttpRequest;
• and JavaScript binding everything together."
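
A minimal sketch (in TypeScript) of the asynchronous-retrieval step in Garrett's list may help: an XMLHttpRequest fetches data in the background and the page is updated through the Document Object Model without a full reload. The /api/messages endpoint and the inbox element are hypothetical.

    // Minimal sketch of the XMLHttpRequest + DOM pattern Garrett describes.
    // The "/api/messages" endpoint and the "inbox" element are hypothetical.
    function refreshInbox(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/messages?unread=1", true); // true = asynchronous

      xhr.onreadystatechange = () => {
        if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
          // Update one part of the page via the Document Object Model,
          // without reloading the whole document.
          const inbox = document.getElementById("inbox");
          if (inbox) {
            inbox.textContent = xhr.responseText;
          }
        }
      };

      xhr.send();
    }

    // Poll every 30 seconds, Gmail-style, so new items appear without a page refresh.
    setInterval(refreshInbox, 30_000);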

AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!,
37signals' applications basecamp and backpack, as well as other Google applications such
as Gmail and Orkut. We're entering an unprecedented period of user interface innovation,
as web developers are finally able to build web applications as rich as local PC-based
applications.

Interestingly, many of the capabilities now being explored have been around for many
years. In the late '90s, both Microsoft and Netscape had a vision of the kind of capabilities
that are now finally being realized, but their battle over the standards to be used made
cross-browser applications difficult. It was only when Microsoft definitively won the browser
wars, and there was a single de-facto browser standard to write to, that this kind of
application became possible. And while Firefox has reintroduced competition to the browser
market, at least so far we haven't seen the destructive competition over web standards that
held back progress in the '90s.

We expect to see many new web applications over the next few years, both truly novel
applications, and rich web reimplementations of PC applications. Every platform change to
date has also created opportunities for a leadership change in the dominant applications of
the previous platform.

Gmail has already provided some interesting innovations in email, combining the strengths
of the web (accessible from anywhere, deep database competencies, searchability) with
user interfaces that approach PC interfaces in usability. Meanwhile, other mail clients on the
PC platform are nibbling away at the problem from the other end, adding IM and presence
capabilities. How far are we from an integrated communications client combining the best of
email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of
web applications? The race is on.

It's easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address
book would treat the local address book on the PC or phone merely as a cache of the
contacts you've explicitly asked the system to remember. Meanwhile, a web-based
synchronization agent, Gmail-style, would remember every message sent or received, every
email address and every phone number used, and build social networking heuristics to
decide which ones to offer up as alternatives when an answer wasn't found in the local
cache. Lacking an answer there, the system would query the broader social network.
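
A hedged sketch of that lookup cascade, with all names and interfaces hypothetical: the client checks the local address-book cache first, then the web-side synchronization agent's message history, and only then the broader social network.

    // Hypothetical sketch of the three-tier lookup described above:
    // local cache -> web-side sync agent (message history) -> broader social network.
    interface Contact {
      name: string;
      email?: string;
      phone?: string;
    }

    interface ContactSource {
      lookup(query: string): Promise<Contact[]>;
    }

    async function findContact(
      query: string,
      localCache: ContactSource,    // contacts the user explicitly asked to remember
      syncAgent: ContactSource,     // every address ever mailed or called, held server-side
      socialNetwork: ContactSource  // the broader network, queried as a last resort
    ): Promise<Contact[]> {
      for (const source of [localCache, syncAgent, socialNetwork]) {
        const matches = await source.lookup(query);
        if (matches.length > 0) {
          return matches; // stop at the first tier that can answer
        }
      }
      return [];
    }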

A Web 2.0 word processor would support wiki-style collaborative editing, not just
standalone documents. But it would also support the rich formatting we've come to expect
in PC-based word processors. Writely is a good example of such an application, although it
hasn't yet gained wide traction.

Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates
how the web can be used to deliver software as a service, in enterprise scale applications
such as CRM.

The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0.
Companies that succeed will create applications that learn from their users, using an
architecture of participation to build a commanding advantage not just in the software
interface, but in the richness of the shared data.

Social network service

A social network service focuses on building online communities of people who share
interests and/or activities, or who are interested in exploring the interests and activities of
others. Most social network services are web based and provide a variety of ways for users
to interact, such as e-mail and instant messaging services.

Social networking has created new ways to communicate and share information. Social
networking websites are being used regularly by millions of people, and it now seems that
social networking will be an enduring part of everyday life. The main types of social
networking services are those which contain directories organized by category (such as former
classmates), means to connect with friends (usually with self-description pages), and
recommender systems linked to trust. Popular services now combine many of these: MySpace
and Facebook are the most widely used in North America;[1] Nexopia is popular mostly in
Canada;[2] Bebo,[3] Facebook, Hi5, MySpace, Tagged, Xing[4] and Skyrock are used in parts of
Europe;[5] Orkut and Hi5 in South America and Central America;[6] and Friendster, Orkut,
Xiaonei and Cyworld in Asia and the Pacific Islands.

There have been some attempts to standardize these services to avoid the need to duplicate
entries of friends and interests (see the FOAF standard and the Open Source Initiative), but
this has led to some concerns about privacy.

History of social networking services

The notion that individual computers linked electronically could form the basis of computer
mediated social interaction and networking was suggested early on [7]. There were many
early efforts to support social networks via computer-mediated communication, including
Usenet, ARPANET, LISTSERV, bulletin board systems (BBS), and EIES: Murray Turoff's
server-based Electronic Information Exchange Service (Turoff and Hiltz, 1978, 1993). The
Information Routing Group developed a schema about how the proto-Internet might support
this.[8]

Early social networking websites started in the form of generalized online communities such
as Theglobe.com (1994),[9] Geocities (1994) and Tripod (1995). These early communities
focused on bringing people together to interact with each other through chat rooms, and on
sharing personal information and ideas around any topic via personal homepage publishing
tools, a precursor to the blogging phenomenon. Some communities took a
different approach by simply having people link to each other via email addresses. These
sites included Classmates.com (1995), focusing on ties with former school mates, and
SixDegrees.com (1997), focusing on indirect ties. Users could create profiles, send
messages to people held on a "friends list", and seek out other members whose profiles
listed similar interests.[10] Whilst these features had existed in some form
before SixDegrees.com came about, this was the first time these functions were
available in one package. Despite these new developments (which would later catch on and
become immensely popular), the website simply wasn't profitable and eventually shut down.[11]
It was even described by the website’s owner as "simply ahead of its time." [12] Two
different models of social networking that came about in 1999 were trust-based, developed
by Epinions.com, and friendship-based, such as those developed by Jonathan Bishop and
used on some regional UK sites between 1999 and 2001.[13] Innovations included not only
showing who is "friends" with whom, but giving users more control over content and
connectivity. Between 2002 and 2004, three social networking sites emerged as the most
popular of their kind in the world, bringing such sites into the mainstream for users globally.
First there was Friendster (which Google tried to acquire in 2003), then MySpace, and finally
Bebo. By 2005, MySpace, emergent as the biggest of them all, was reportedly getting more
page views than Google. 2004 saw the emergence of Facebook, a competitor, also rapidly
growing in size.[14] In 2006, Facebook opened up beyond the US college community and
began allowing externally developed add-on applications, some of which enabled the graphing
of a user's own social network (thus linking social networks and social networking); it became
the largest and fastest growing site in the world, not limited by particular geographical
followings.[15]

Social networking began to flourish as a component of business internet strategy at around
March 2005 when Yahoo launched Yahoo! 360°. In July 2005 News Corporation bought
MySpace, followed by ITV (UK) buying Friends Reunited in December 2005.[16][17] Various
social networking sites have sprung up catering to different languages and countries. It is
estimated that combined there are now over 200 social networking sites using these
existing and emerging social networking models,[18] without counting the niche social
networks (also referred to as vertical social networks) made possible by services such as
Ning.[19]

Research on the social impact of social networking software

An increasing number of academic commentators are becoming interested in studying
Facebook and other social networking tools. Social science researchers have begun to
investigate what the impact of this might be on society. Typical articles have investigated
issues such as identity,[20] privacy,[21] e-learning,[22] social capital[23] and teenage use.[24]

A special issue of the Journal of Computer-Mediated Communication was dedicated to
studies of social network sites. Included in this issue is an introduction to social network
sites.[25]

A 2008 book published by Forrester Research, Inc. titled Groundswell builds on a 2006
Forrester Report about social computing and coins the term groundswell to mean "a
spontaneous movement of people using online tools to connect, take charge of their own
experience, and get what they need--information, support, ideas, products, and bargaining
power--from each other."

Business applications

Social networks connect people at low cost; this can be beneficial for entrepreneurs and
small businesses looking to expand their contact base. These networks often act as a
customer relationship management tool for companies selling products and services.
Companies can also use social networks for advertising in the form of banners and text ads.
Since businesses operate globally, social networks can make it easier to keep in touch with
contacts around the world.

One example of social networking being used for business purposes is LinkedIn.com, which
aims to interconnect professionals. It claims to have more than 20 million registered users
from 150 different industries.

Professional networking sites function as online meeting places for business and industry
professionals. Other sites are adapting this model for niche business professional
networking.

Virtual communities for business allow individuals to be accessible. People establish their
real identity in a verifiable place. These individuals then interact with each other or within
groups that share common business interests and goals. They can also post their own user
generated content in the form of blogs, pictures, slide shows and videos. Like a social
network, the consumer essentially becomes the publisher.

A professional network is used for the business-to-business marketplace. These networks
improve the ability for people to advance professionally, by finding, connecting and
networking with others. Business professionals can share experiences with others who have
a need to learn from similar experiences.

The traditional way to interact is face-to-face. Interactive technology makes it possible for
people to network with their peers from anywhere, at any time, in an online environment.
Professional network services attract, aggregate and assemble large business-focused
audiences by creating informative and interactive meeting places.

Medical applications

Social networks are beginning to be adopted by healthcare professionals as a means to
manage institutional knowledge, disseminate peer-to-peer knowledge and to highlight
individual physicians and institutions. The advantage of using a dedicated medical social
networking site is that all the members are screened against the state licensing board list of
practitioners.[26]

The role of social networks is especially of interest to pharmaceutical companies who spend
approximately "32 percent of their marketing dollars" attempting to influence the opinion
leaders of social networks.[27]

A new trend is emerging with social networks created to help their members with various
physical and mental ailments. For people suffering from life-altering diseases,
PatientsLikeMe offers its members the chance to connect with others dealing with similar
issues and research patient data related to their condition. For alcoholics and addicts,
SoberCircle gives people in recovery the ability to communicate with one another and
strengthen their recovery through the encouragement of others who can relate to their
situation. DailyStrength is also a website that offers support groups for a wide array of
topics and conditions, including the support topics offered by PatientsLikeMe and
SoberCircle.

Social networks for social good

Several websites are beginning to tap into the power of the social networking model for
social good. Such models may be highly successful for connecting otherwise fragmented
industries and small organizations without the resources to reach a broader audience with
interested and passionate users. Users benefit by interacting with a like minded community
and finding a channel for their energy and giving. [28] Examples include SixDegrees.org,
TakingITGlobal, G21.com, BabelUp, Care2, Change.org, Gather.org, Idealist.org,
OneWorldWiki, TakePart.com and Network for Good. The charity badge is often used within
the above context.

Pros of social networking applications

- CMC (computer-mediated communication) can have a positive effect on student/teacher communication, which can lead to
positive student outcomes. The use of emoticons enables the relationship between teachers
and students to become more personal. [29]

- Business decision makers now prefer communication channels that are two-way
dialogs, channels that resemble social networking applications. This gives businesses a way
to advertise their products, one that has proved more effective
than traditional "word of mouth" influence.[30]

- Social networking allows us to identify and connect with friends and strangers while on the
go. Such computer mediated communication also allows us to reconnect with friends from
the past whom we may have lost contact with. [31]
- LinkedIn is an SNS (social networking site) particularly used by jobseekers. It is a tool used
to link users to people they may have worked with in the past through various jobs or
institutions. Users also have the opportunity to link to certain companies they aspire to work
with. [32]

Cons of social networking applications

Not all networking applications used in the professional environment are
beneficial or successful. Some prospects experience trouble while trying to build their
networks, thus they may produce ineffective work. [33] Employees are now more likely than
before to carry on inappropriate conversations at work. Communicating with such
technologies creates a relaxed feeling in a professional environment. Some messages that
should be relayed in person are being sent through the computer; the nature of the
message and the audience should dictate the medium used to transmit the message. [34] For
example, the ability to network with 100 people does not by itself improve communication
skills when actually in contact with them.[35]

Typical structure of a social networking service

Basics

In general, social networking services allow users to create a profile for themselves, and can
be broken down into two broad categories: internal social networking (ISN)[36] and external
social networking (ESN)[37] sites, such as Orkut, MySpace, Facebook and Bebo. Both types
can increase the feeling of community among people. An ISN is a closed/private community
that consists of a group of people within a company, association, society, education provider
or organization, or even an "invite only" group created by a user in an ESN. An ESN is
open/public, available to all web users to communicate, and is designed to attract
advertisers. ESNs can be smaller specialised communities (linked by a single common
interest, e.g. TheSocialGolfer, ACountryLife.Com, Great Cooks Community) or they can be
large generic social networking sites (e.g. MySpace, Facebook).

However, whether specialised or generic, there is commonality across the general approach
of social networking sites. Users can upload a picture of themselves, create their 'profile'
and can often be "friends" with other users. In most social networking services, both users
must confirm that they are friends before they are linked. For example, if Alice lists Bob as a
friend, then Bob would have to approve Alice's friend request before they are listed as
friends. Some social networking sites have a "favorites" feature that does not need approval
from the other user. Social networks usually have privacy controls that allow the user to
choose who can view their profile or contact them, etc.
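
As a hedged illustration of the two linking styles just described (all identifiers hypothetical): a friendship is recorded only after both sides agree, whereas a "favorites"-style link needs no approval from the other user.

    // Hypothetical sketch of confirmed, symmetric "friend" links versus an
    // unconfirmed "favorites"-style link. Storage is kept in memory for brevity.
    type UserId = string;

    const pendingRequests = new Set<string>();        // "alice->bob" awaiting approval
    const friendships = new Set<string>();            // confirmed, stored once per pair
    const favorites = new Map<UserId, Set<UserId>>(); // one-way, no approval needed

    const pairKey = (a: UserId, b: UserId) => [a, b].sort().join("|");

    function requestFriendship(from: UserId, to: UserId): void {
      pendingRequests.add(`${from}->${to}`);          // Alice lists Bob as a friend...
    }

    function approveFriendship(from: UserId, to: UserId): boolean {
      if (!pendingRequests.delete(`${from}->${to}`)) return false;
      friendships.add(pairKey(from, to));             // ...and only Bob's approval links them
      return true;
    }

    function addFavorite(user: UserId, target: UserId): void {
      if (!favorites.has(user)) favorites.set(user, new Set());
      favorites.get(user)!.add(target);               // no confirmation required
    }

    requestFriendship("alice", "bob");
    approveFriendship("alice", "bob");                // friends only after this step
    addFavorite("carol", "alice");                    // Carol can favorite Alice unilaterally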

Some social networking sites are created for the benefit of particular groups, such as the
parents' social networking site "Gurgle", a website where parents talk about pregnancy, birth
and bringing up children.

Several social networks in Asian markets such as India, China, Japan and Korea have
reached not only a high usage but also a high level of profitability. Services such as QQ
(China), Mixi (Japan), Cyworld (Korea) or the mobile-focused service Mobile Game Town by
the company DeNA in Japan (which has over 10 million users) are all profitable, setting
them apart from their western counterparts.[38]

Social status

The social status of an individual is revealed on social networks. Sociologist Erving Goffman
refers to the "Interaction Order", which he claims is the "part of the social life where face-to-
face and spoken interactions occur" (Rheingold: 2002, p. 171). He believes that the way
people represent themselves provides other users with the information about them they want
others to believe, while concealing the rest. Goffman also believes that people "give off"
information, "leaking true but uncontrolled information along with their more deliberate
performances" (Rheingold: 2002, p. 171). Through social networks, people are now able to
largely control the information presented about themselves: the photos they include, the
details they provide (whether true or false), and the friends they make. People are
therefore now able to manage their personal information and their desired social status.

Additional features

Some social networks have additional features, such as the ability to create groups that
share common interests or affiliations, upload or stream live videos, and hold discussions in
forums. Geosocial networking co-opts internet mapping services to organize user
participation around geographic features and their attributes.

There is also a trend for more interoperability between social networks led by technologies
such as OpenID and OpenSocial.

Lately, mobile social networking has become popular. In most mobile communities, mobile
phone users can now create their own profiles, make friends, participate in chat rooms,
create chat rooms, hold private conversations, share photos and videos, and share blogs by
using their mobile phone. Mobile phone users essentially have access to every option that
someone sitting at a computer has. Some companies provide wireless services which
allow their customers to build their own mobile community and brand it, but one of the
most popular wireless services for social networking in North America is Facebook Mobile.
Other companies provide new innovative features which extend the social networking
experience into the real world.

Business model

Few social networks currently charge money for membership. In part, this may be because
social networking is a relatively new service, and the value of using them has not been
firmly established in customers' minds.[citation needed] Companies such as MySpace and
Facebook sell online advertising on their site. Hence, they are seeking large memberships,
and charging for membership would be counterproductive.[39] Some believe that the deeper
information that the sites have on each user will allow much better targeted advertising
than any other site can currently provide.[40]

Social networks operate under an autonomous business model, in which a social network's
members serve dual roles as both the suppliers and the consumers of content. This is in
contrast to a traditional business model, where the suppliers and consumers are distinct
agents. Revenue is typically gained in the autonomous business model via advertisements,
but subscription-based revenue is possible when membership and content levels are
sufficiently high.[41]

Privacy

On large social networking services, there have been growing concerns about users giving
out too much personal information and the threat of sexual predators. Users of these
services need to be aware of data theft or viruses. However, large services, such as
MySpace, often work with law enforcement to try to prevent such incidents.

In addition, there is a perceived privacy threat in placing too much personal
information in the hands of large corporations or governmental bodies, allowing a profile of
an individual's behavior to be produced, on which decisions detrimental to that individual
may be taken.

Furthermore, there is an issue over the control of data—information having been altered or
removed by the user may in fact be retained and/or passed to third parties. This danger was
highlighted when the controversial social networking site Quechup harvested e-mail
addresses from users' e-mail accounts for use in a spamming operation.[42]

In medical and scientific research, asking subjects for information about their behaviors is
normally strictly scrutinized by institutional review boards, for example, to ensure that
adolescents and their parents have informed consent. It is not clear whether the same rules
apply to researchers who collect data from social networking sites. These sites often contain
a great deal of data that is hard to obtain via traditional means. Even though the data are
public, republishing it in a research paper might be considered invasion of privacy.[43]

Notifications on social networking websites

There has been a trend for social networking sites to send out only 'positive' notifications to
users. For example, sites such as Bebo, Facebook, and MySpace will not send notifications to
users when they are removed from a person's friends list. Similarly, Bebo will send out a
notification if a user is moved to the top of another user's friends list but no notification is
sent if they are moved down the list.

This allows users to purge undesirables from their list extremely easily and often without
confrontation since a user will rarely notice if one person disappears from their friends list.
It also enforces the general positive atmosphere of the website without drawing attention to
unpleasant happenings such as friends falling out, rejection and failed relationships.

Investigations

Social network services are increasingly being used in legal and criminal investigations.
Information posted on sites such as MySpace and Facebook has been used by police,
probation, and university officials to prosecute users of said sites. In some situations,
content posted on MySpace has been used in court.[44]

Facebook is increasingly being used by school administrations and law enforcement
agencies as a source of evidence against student users. The site, the number one online
destination for college students, allows users to create profile pages with personal details.
These pages can be viewed by other registered users from the same school, which often
include resident assistants and campus police who have signed up for the service.[45] It has
recently been revealed that some UK police forces are using social network services such as
Facebook to help them crack down on knife and gun crime. It is believed that up to 400
users of Facebook have been arrested as a result of searches of this site revealing users
posing with dangerous weapons.[citation needed]

Potential for misuse

The relative freedom afforded by social networking services has caused concern regarding
the potential for misuse by individual patrons. In October 2006, a fake MySpace profile
created in the name of Josh Evans by Lori Janine Drew led to the suicide of Megan Meier.[46]
The event incited global concern regarding the use of social networking services for bullying
purposes.

In July 2008, a Briton, Grant Raphael, was ordered to pay a total of GBP £22,000 (about
USD $44,000) for libel and breach of privacy. Raphael had posted a fake page on Facebook
purporting to be that of a former schoolfriend Matthew Firsht, with whom Raphael had fallen
out in 2000. The page falsely claimed that Firsht was homosexual and that he was
dishonest.

At the same time, genuine use of social networking services has been treated with suspicion
on the grounds of the services' misuse. In September 2008, the profile of Australian Facebook
user Elmo Keep was banned by the site's administrators on the grounds that it violated the
site's terms of use. Keep is one of several users of Facebook who were banned from the site
on the presumption that their names aren't real, as they bear a resemblance to the names of
characters like Sesame Street's Elmo.[47] The misuse of social networking services has led
many[who?] to cast doubt over whether any information on these services can in fact be
regarded as true[citation needed].
