2DaMax Marketing | (646) 762-7511
Search engine optimization (SEO) is the process of growing the quality and quantity of website traffic by increasing the visibility of a website or a web page to users of a web search engine.[1] SEO refers to the improvement of unpaid results (known as "natural" or "organic" results) and excludes direct traffic and the purchase of paid placement.
Additionally, it may target different kinds of searches, including image search, video search, academic search,[2] news search, and industry-specific vertical search engines.
Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.
By May 2015, mobile search had surpassed desktop search.[3] As an Internet marketing strategy, SEO considers how search engines work, the computer-programmed algorithms that dictate search engine behavior, what people search for, the actual search terms or keywords typed into search engines, and which search engines are preferred by their targeted audience.
SEO is performed because a website will receive more visitors from a search engine when it ranks higher on the search engine results page (SERP).
These visitors can then be converted into customers.[4] SEO differs from local search engine optimization in that the latter is focused on optimizing a business' online presence so that its web pages will be displayed by search engines when a user enters a local search for its products or services.
The former instead is more focused on national or international searches.
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web.
Initially, all webmasters only needed to submit the address of a page, or URL, to the various engines which would send a web crawler to crawl that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server.
A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains.
All of this information is then placed into a scheduler for crawling at a later date.
Website owners recognized the value of a high ranking and visibility in search engine results,[6] creating an opportunity for both white hat and black hat SEO practitioners.
According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997.
Sullivan credits Bruce Clay as one of the first people to popularize the term.[7] On May 2, 2007,[8] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[9] that SEO is a "process" involving manipulation of keywords and not a "marketing service."
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB.
Meta tags provide a guide to each page's content.
Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content.
Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10][dubious – discuss] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords.
Early search engines, such as Altavista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12] By relying so much on factors such as keyword density which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation.
To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters.
This meant moving away from heavy reliance on term density to a more holistic process for scoring semantic signals.[13] Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources.
Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with Search engine optimization and related topics.[14] Companies that employ overly aggressive techniques can get their client websites banned from the search results.
In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[15] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[16] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[17] Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, webchats, and seminars.
Major search engines provide information and guidelines to help with website optimization.[18][19] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[20] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and lets them track the index status of their web pages.
In 2015, it was reported that Google was developing and promoting mobile search as a key feature within future products.
In response, many brands began to take a different approach to their Internet marketing strategies.[21]
In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "BackRub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages.
The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[22] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another.
In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
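The random-surfer idea can be made concrete with a short computation. The sketch below is a simplified, illustrative version of the published PageRank iteration (not Google's production algorithm), run over a tiny made-up link graph with the commonly cited damping factor of 0.85.

```python
# Illustrative PageRank via power iteration over a tiny, hypothetical link graph.
# links maps each page to the pages it links out to.
links = {
    "home":    ["about", "blog"],
    "about":   ["home"],
    "blog":    ["home", "about", "contact"],
    "contact": ["home"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)          # a page splits its rank among its outlinks
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The page that attracts links from the most well-linked pages ends up with the highest score, which is the intuition behind some links counting for more than others.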
Page and Brin founded Google in 1998.[23] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[24] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings.
Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank.
Many sites focused on exchanging, buying, and selling links, often on a massive scale.
Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[25] By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation.
In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals.[26] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages.
Some SEO practitioners have studied different approaches to Search engine optimization, and have shared their personal opinions.[27] Patents related to search engines can provide information to better understand search engines.[28] In 2005, Google began personalizing search results for each user.
Depending on their history of previous searches, Google crafted results for logged-in users.[29] In 2007, Google announced a campaign against paid links that transfer PageRank.[30] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links.
Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[31] As a result of this change, the use of nofollow led to evaporation of PageRank.
To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting.
Additionally, several solutions have been suggested that include the use of iframes, Flash, and JavaScript.[32]
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[33] On June 8, 2010, a new web indexing system called Google Caffeine was announced.
Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up quicker on Google than before.
According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[34] Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant.
Historically, site administrators have spent months or even years optimizing a website to increase search rankings.
With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[35] In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources.
Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice.
However, Google implemented a new system that punishes sites whose content is not unique.[36] The 2012 Google Penguin update attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.[37] Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it really focuses on spammy links[38] by gauging the quality of the sites the links come from.
The 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.
Hummingbird's language processing system falls under the newly recognized term of "conversational search", where the system pays more attention to each word in the query in order to match pages to the meaning of the whole query rather than to a few words.[39] With regard to search engine optimization, Hummingbird is intended to resolve issues for content publishers and writers by getting rid of irrelevant content and spam, allowing Google to surface high-quality content and rely on 'trusted' authors.
In October 2019, Google announced they would start applying BERT models for English language search queries in the US.
Bidirectional Encoder Representations from Transformers (BERT) was another attempt by Google to improve its natural language processing, this time in order to better understand the search queries of its users.[40] In terms of search engine optimization, BERT was intended to connect users more easily to relevant content and increase the quality of traffic coming to websites ranking in the search engine results page.
The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results.
Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically.
The Yahoo! Directory and DMOZ, two major directories which closed in 2014 and 2017 respectively, both required manual submission and human editorial review.[41] Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links[42] in addition to their URL submission console.[43] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[44] however, this practice was discontinued in 2009.
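As a rough illustration of such a sitemap feed, the sketch below generates a minimal sitemap.xml with Python's standard library; the URLs and dates are placeholders, and the resulting file would be placed at the site root and submitted through Search Console.

```python
# Minimal sitemap.xml generator (illustrative; URLs and dates are placeholders).
import xml.etree.ElementTree as ET

pages = [
    ("https://www.example.com/", "2024-01-15"),
    ("https://www.example.com/services", "2024-01-10"),
    ("https://www.example.com/contact", "2024-01-05"),
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc          # the page's canonical address
    ET.SubElement(url, "lastmod").text = lastmod  # when it was last modified

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```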
Search engine crawlers may look at a number of different factors when crawling a site.
Not every page is indexed by the search engines.
The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[45] Today, most people are searching on Google using a mobile device.[46] In November 2016, Google announced a major change to the way it crawls websites and started to make its index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in its index.[47] In May 2019, Google updated the rendering engine of its crawler to be the latest version of Chromium (74 at the time of the announcement).
Google indicated that they would regularly update the Chromium rendering engine to the latest version.[48] In December 2019, Google began updating the User-Agent string of their crawler to reflect the latest Chrome version used by their rendering service.
The delay was to allow webmasters time to update their code that responded to particular bot User-Agent strings.
Google ran evaluations and felt confident the impact would be minor.[49] To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain.
Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex"> ).
When a search engine visits a site, the robots.txt located in the root directory is the first file crawled.
The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled.
As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled.
Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches.
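A quick way to check what a site's robots.txt permits is Python's built-in parser. The sketch below is illustrative; the domain and paths are placeholders.

```python
# Check whether a crawler is allowed to fetch a URL, per the site's robots.txt.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder domain
rp.read()                                          # downloads and parses the file

# Note: a page disallowed here can still be indexed if other sites link to it;
# to keep it out of the index entirely, serve <meta name="robots" content="noindex">.
print(rp.can_fetch("Googlebot", "https://www.example.com/cart"))
print(rp.can_fetch("*", "https://www.example.com/blog/post-1"))
```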
In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[50]
A variety of methods can increase the prominence of a webpage within the search results.
Cross linking between pages of the same website to provide more links to important pages may improve its visibility.[51] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[51] Updating content so as to keep search engines crawling back frequently can give additional weight to a site.
Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic.
URL canonicalization of web pages accessible via multiple URLs, using the canonical link element[52] or via 301 redirects can help make sure links to different versions of the URL all count towards the page's link popularity score.
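One way to audit the on-page elements just described (the title tag, meta description, and canonical link) is sketched below; it assumes the third-party beautifulsoup4 package, and the sample HTML stands in for a fetched page.

```python
# Extract basic on-page SEO elements from an HTML document (illustrative).
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """
<html><head>
  <title>Plumbing Services in New York | Example Co.</title>
  <meta name="description" content="Licensed plumbers serving all five boroughs.">
  <link rel="canonical" href="https://www.example.com/plumbing">
</head><body><h1>Plumbing Services</h1></body></html>
"""

soup = BeautifulSoup(html, "html.parser")

title = soup.title.string if soup.title else None
description_tag = soup.find("meta", attrs={"name": "description"})

canonical = None
for link in soup.find_all("link"):
    if "canonical" in (link.get("rel") or []):   # rel is parsed as a list of values
        canonical = link.get("href")
        break

print("title:      ", title)
print("description:", description_tag["content"] if description_tag else None)
print("canonical:  ", canonical)
```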
SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat").
The search engines attempt to minimize the effect of the latter, among them spamdexing.
Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[53] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[54] An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception.
As the search engine guidelines[18][19][55] are not written as a series of rules or commandments, this is an important distinction to note.
White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.
White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online "spider" algorithms, rather than attempting to trick the algorithm from its intended purpose.
White hat SEO is in many ways similar to web development that promotes accessibility,[56] although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception.
One black hat technique uses hidden text, either as text colored similar to the background, in an invisible div, or positioned off screen.
Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
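A rough way to spot the cloaking pattern described above is to request the same URL with a browser-style User-Agent and a crawler-style one and compare the responses. The sketch below uses only the Python standard library; the URL, User-Agent strings, and similarity threshold are illustrative.

```python
# Compare the HTML served to a "browser" versus a "crawler" User-Agent (illustrative).
import difflib
import urllib.request

URL = "https://www.example.com/"  # placeholder

def fetch(user_agent):
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

browser_html = fetch("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
crawler_html = fetch("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

similarity = difflib.SequenceMatcher(None, browser_html, crawler_html).ratio()
print(f"similarity: {similarity:.2f}")
if similarity < 0.9:  # arbitrary threshold for illustration
    print("Responses differ substantially; the page may be cloaking.")
```

Legitimate personalization can also produce differences, so a low similarity score is a prompt to investigate rather than proof of cloaking.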
Another category sometimes used is grey hat SEO.
This is in between the black hat and white hat approaches: the methods employed avoid the site being penalized but do not go as far as producing the best content for users.
Grey hat SEO is entirely focused on improving search engine rankings.
Search engines may penalize sites they discover using black or grey hat methods, either by reducing their rankings or eliminating their listings from their databases altogether.
Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review.
One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[57] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's search engine results page.[58]
SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay per click (PPC) campaigns, depending on the site operator's goals.
Search engine marketing (SEM) is the practice of designing, running and optimizing search engine ad campaigns.[59] Its difference from SEO is most simply depicted as the difference between paid and unpaid priority ranking in search results.
Its purpose regards prominence more than relevance; website developers should regard SEM with the utmost importance with consideration to visibility, as most users navigate to the primary listings of their search.[60] A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[61]
In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public,[62] which revealed a shift in its focus towards "usefulness" and mobile search.
In recent years the mobile market has exploded, overtaking the use of desktops, as shown by StatCounter in October 2016, when it analyzed 2.5 million websites and found that 51.3% of the pages were loaded by a mobile device.[63] Google has been one of the companies taking advantage of the popularity of mobile usage by encouraging websites to use its Google Search Console and the Mobile-Friendly Test, which allows companies to measure how their website performs in the search engine results and how user-friendly it is.
SEO may generate an adequate return on investment.
However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals.
Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[64] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic.
According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[65] It is considered a wise business practice for website operators to liberate themselves from dependence on search engine traffic.[66]
In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.
Optimization techniques are highly tuned to the dominant search engines in the target market.
The search engines' market shares vary from market to market, as does competition.
In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[67] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[68] As of 2006, Google had an 85–90% market share in Germany.[69] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[69] As of June 2008, the market share of Google in the UK was close to 90% according to Hitwise.[70] That market share is achieved in a number of countries.
As of 2009, there are only a few large markets where Google is not the leading search engine.
In most cases, when Google is not leading in a given market, it is lagging behind a local player.
The most notable example markets are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.
Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address.
Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[69]
On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google.
SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations.
On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[71][72] In March 2006, KinderStart filed a lawsuit against Google over search engine rankings.
KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%.
On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[73][74]
Search engine marketing (SEM) is a form of Internet marketing that involves the promotion of websites by increasing their visibility in search engine results pages (SERPs) primarily through paid advertising.[1] SEM may incorporate search engine optimization (SEO), which adjusts or rewrites website content and site architecture to achieve a higher ranking in search engine results pages to enhance pay per click (PPC) listings.[2]
In 2007, U.S. advertisers spent US$24.6 billion on search engine marketing.[3] In Q2 2015, Google (73.7%) and the Yahoo/Bing (26.3%) partnership accounted for almost 100% of U.S. search engine spend.[4] As of 2006, SEM was growing much faster than traditional advertising and even other channels of online marketing.[5] Managing search campaigns is either done directly with the SEM vendor or through an SEM tool provider.
It may also be self-serve or through an advertising agency.
As of October 2016, Google leads the global search engine market with a market share of 89.3%.
Bing comes second with a market share of 4.36%, Yahoo comes third with a market share of 3.3%, and Chinese search engine Baidu is fourth globally with a share of about 0.68%.[6]
As the number of sites on the Web increased in the mid-to-late 1990s, search engines started appearing to help people find information quickly.
Search engines developed business models to finance their services, such as pay per click programs offered by Open Text[7] in 1996 and then Goto.com[8] in 1998.
Goto.com later changed its name[9] to Overture in 2001, was purchased by Yahoo! in 2003, and now offers paid search opportunities for advertisers through Yahoo! Search Marketing.
Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program.
By 2007, pay-per-click programs proved to be primary moneymakers[10] for search engines.
In a market dominated by Google, in 2009 Yahoo! and Microsoft announced the intention to forge an alliance.
The Yahoo! & Microsoft Search Alliance eventually received approval from regulators in the US and Europe in February 2010.[11] Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged.
The term "Search engine marketing" was popularized by Danny Sullivan in 2001[12] to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals.
Search engine marketing uses at least five methods and metrics to optimize websites.[citation needed] Search engine marketing is a way to create and edit a website so that search engines rank it higher than other pages.
It also focuses on keyword marketing and pay-per-click (PPC) advertising.
The technology enables advertisers to bid on specific keywords or phrases and ensures that their ads appear alongside search engine results.
As this system has developed, prices have risen under intense competition.
Many advertisers choose to expand their activities by advertising on additional search engines and bidding on more keywords.
The more advertisers are willing to pay for clicks, the higher the ad's ranking, which leads to higher traffic.[15] PPC comes at a cost.
The top position for a given keyword might cost $5 per click, while the third position might cost $4.50; the third advertiser pays about 10% less per click than the top advertiser but receives roughly 50% less traffic.[15] Investors must consider their return on investment when engaging in PPC campaigns.
Buying traffic via PPC will deliver a positive ROI when the total cost-per-click for a single conversion remains below the profit margin.
That way the amount of money spent to generate revenue is below the actual revenue generated.
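A worked example of that break-even condition, with made-up figures:

```python
# Break-even check for a PPC campaign (illustrative figures).
cost_per_click = 4.50       # average CPC in dollars
conversion_rate = 0.02      # 2% of clicks convert to a sale
profit_per_sale = 300.00    # gross profit per conversion, in dollars

cost_per_conversion = cost_per_click / conversion_rate   # $225 spent to buy one sale
roi = (profit_per_sale - cost_per_conversion) / cost_per_conversion

print(f"cost per conversion: ${cost_per_conversion:.2f}")
print(f"ROI: {roi:.0%}")    # positive only while cost per conversion < profit per sale
```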
There are many reasons explaining why advertisers choose the SEM strategy.
First, creating an SEM account is easy and can build traffic quickly, depending on the degree of competition.
Shoppers who use a search engine to find information tend to trust and focus on the links shown in the results pages.
However, a large number of online sellers do not invest in search engine optimization to obtain higher rankings in organic results, preferring paid links instead.
A growing number of online publishers are allowing search engines such as Google to crawl content on their pages and place relevant ads on it.[16] From an online seller's point of view, this is an extension of the payment settlement and an additional incentive to invest in paid advertising projects.
It is therefore virtually impossible for advertisers with limited budgets to maintain the highest rankings in the increasingly competitive search market.
Google's search engine marketing is one of the western world's marketing leaders, and search advertising is its biggest source of profit.[17] Google's search network is clearly ahead of the Yahoo and Bing networks.
Organic search results are displayed free of charge, while advertisers pay for each click on an ad in the sponsored search results.
Paid inclusion involves a search engine company charging fees for the inclusion of a website in their results pages.
Also known as sponsored listings, paid inclusion products are provided by most search engine companies either in the main results area or as a separately identified advertising area.
The fee structure is both a filter against superfluous submissions and a revenue generator.
Typically, the fee covers an annual subscription for one webpage, which will automatically be catalogued on a regular basis.
However, some companies are experimenting with non-subscription based fee structures where purchased listings are displayed permanently.
A per-click fee may also apply.
Each search engine is different.
Some sites allow only paid inclusion, although these have had little success.
More frequently, many search engines, like Yahoo!,[18] mix paid inclusion (per-page and per-click fee) with results from web crawling.
Others, like Google (and as of 2006, Ask.com[19][20]), do not let webmasters pay to be in their search engine listing (advertisements are shown separately and labeled as such).
Some detractors of paid inclusion allege that it causes searches to return results based more on the economic standing of the interests of a web site, and less on the relevancy of that site to end-users.
Often the line between pay per click advertising and paid inclusion is debatable.
Some have lobbied for any paid listings to be labeled as an advertisement, while defenders insist they are not actually ads since the webmasters do not control the content of the listing, its ranking, or even whether it is shown to any users.
Another advantage of paid inclusion is that it allows site owners to specify particular schedules for crawling pages.
In the general case, one has no control as to when their page will be crawled or added to a search engine index.
Paid inclusion proves to be particularly useful for cases where pages are dynamically generated and frequently modified.
Paid inclusion is a Search engine marketing method in itself, but also a tool of search engine optimization since experts and firms can test out different approaches to improving ranking and see the results often within a couple of days, instead of waiting weeks or months.
Knowledge gained this way can be used to optimize other web pages, without paying the search engine company.
SEM is the wider discipline that incorporates SEO.
SEM includes both paid search results (using tools like Google AdWords or Bing Ads, formerly known as Microsoft adCenter) and organic search results (SEO).
SEM uses paid advertising with AdWords or Bing Ads, pay per click (particularly beneficial for local providers as it enables potential consumers to contact a company directly with one click), article submissions, advertising and making sure SEO has been done.
A keyword analysis is performed for both SEO and SEM, but not necessarily at the same time.
SEM and SEO both need to be monitored and updated frequently to reflect evolving best practices.
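A common starting point for such a keyword analysis is a simple term-frequency pass over existing page copy. The sketch below is a minimal illustration; the sample text and stop-word list are placeholders for a real corpus and a fuller stop-word set.

```python
# Count candidate keywords in a block of page copy (illustrative).
import re
from collections import Counter

text = """Our New York plumbing team handles emergency plumbing repairs,
drain cleaning, and water heater installation across New York City."""

stopwords = {"our", "and", "the", "a", "of", "across", "to", "in"}
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(w for w in words if w not in stopwords)

for word, count in counts.most_common(5):
    print(f"{word}: {count}")
```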
In some contexts, the term SEM is used exclusively to mean pay per click advertising,[2] particularly in the commercial advertising and marketing communities which have a vested interest in this narrow definition.
Such usage excludes the wider search marketing community that is engaged in other forms of SEM such as search engine optimization and search retargeting.
Creating the link between SEO and PPC represents an integral part of the SEM concept.
Sometimes, especially when separate teams work on SEO and PPC and the efforts are not synced, positive results of aligning their strategies can be lost.
The aim of both SEO and PPC is maximizing the visibility in search and thus, their actions to achieve it should be centrally coordinated.
Both teams can benefit from setting shared goals and combined metrics, evaluating data together to determine future strategy or discuss which of the tools works better to get the traffic for selected keywords in the national and local search results.
Thanks to this, the search visibility can be increased along with optimizing both conversions and costs.[21] Another part of SEM is social media marketing (SMM).
SMM is a type of marketing that involves exploiting social media to persuade consumers that one company's products and/or services are valuable.[22] Some of the latest theoretical advances include search engine marketing management (SEMM).
SEMM relates to activities including SEO but focuses on return on investment (ROI) management instead of relevant traffic building (as is the case of mainstream SEO).
SEMM also integrates organic SEO, trying to achieve top ranking without using paid means to achieve it, and pay per click SEO.
For example, some of the attention is placed on the web page layout design and how content and information is displayed to the website visitor.
SEO & SEM are two pillars of one marketing job and they both run side by side to produce much better results than focusing on only one pillar.
Paid search advertising has not been without controversy and the issue of how search engines present advertising on their search result pages has been the target of a series of studies and reports[23][24][25] by Consumer Reports WebWatch.
The Federal Trade Commission (FTC) also issued a letter[26] in 2002 about the importance of disclosure of paid advertising on search engines, in response to a complaint from Commercial Alert, a consumer advocacy group with ties to Ralph Nader.
Another ethical controversy associated with search marketing has been the issue of trademark infringement.
The debate as to whether third parties should have the right to bid on their competitors' brand names has been underway for years.
In 2009 Google changed their policy, which formerly prohibited these tactics, allowing third parties to bid on branded terms as long as their landing page in fact provides information on the trademarked term.[27] Though the policy has been changed, this continues to be a source of heated debate.[28] On April 24, 2012, Google began penalizing companies caught buying links for the purpose of passing PageRank.
The Google update was called Penguin.
Since then, there have been several different Penguin/Panda updates rolled out by Google.
SEM has, however, nothing to do with link buying and focuses on organic SEO and PPC management.
As of October 20, 2014, Google had released three official revisions of their Penguin Update.
In 2013, the Tenth Circuit Court of Appeals held in Lens.com, Inc. v. 1-800 Contacts, Inc. that online contact lens seller Lens.com did not commit trademark infringement when it purchased search advertisements using competitor 1-800 Contacts' federally registered 1800 CONTACTS trademark as a keyword.
In August 2016, the Federal Trade Commission filed an administrative complaint against 1-800 Contacts alleging, among other things, that its trademark enforcement practices in the Search engine marketing space have unreasonably restrained competition in violation of the FTC Act.
1-800 Contacts has denied all wrongdoing and appeared before an FTC administrative law judge in April 2017.[29]
AdWords is recognized as a web-based advertising tool, since it adopts keywords that can deliver adverts explicitly to web users looking for information about a certain product or service.
It is flexible and provides customizable options such as Ad Extensions, access to non-search sites, and leveraging the display network to help increase brand awareness.
It hinges on cost-per-click (CPC) pricing, where a maximum cost per day for the campaign can be set, and payment for the service applies only when the advert is clicked.
SEM companies have embarked on AdWords projects as a way to publicize their SEM and SEO services.
One of the most successful approaches to this strategy was to focus on making sure that PPC advertising funds were prudently invested.
Moreover, SEM companies have described AdWords as a practical tool for increasing a consumer's return on Internet advertising.
The use of conversion tracking and Google Analytics tools was deemed practical for presenting to clients the performance of their campaigns from click to conversion.
AdWords projects have enabled SEM companies to train their clients on the tool and deliver better campaign performance.
AdWords campaigns could contribute to the growth of web traffic for a number of client websites, by as much as 250% in only nine months.[30] Another way search engine marketing is managed is by contextual advertising.
Here marketers place ads on other sites or portals that carry information relevant to their products so that the ads jump into the circle of vision of browsers who are seeking information from those sites.
A successful SEM plan is the approach to capture the relationships amongst information searchers, businesses, and search engines.
Search engines were not important to some industries in the past, but over the past years the use of search engines for accessing information has become vital to increase business opportunities.[31] The use of SEM strategic tools for businesses such as tourism can attract potential consumers to view their products, but it can also pose various challenges.[32] These challenges could be the competition that companies face within their industry and other sources of information that could draw the attention of online consumers.[31] To meet these challenges, the main objective for businesses applying SEM is to improve and maintain their ranking as high as possible on SERPs so that they can gain visibility.
Therefore, search engines are adjusting and developing algorithms and shifting the criteria by which web pages are ranked in order to combat search engine misuse and spamming, and to supply the most relevant information to searchers.[31] This could enhance the relationship among information searchers, businesses, and search engines by understanding the strategies of marketing to attract business.
Search engine results pages (SERP) are the pages displayed by search engines in response to a query by a user.
The main component of the SERP is the listing of results that are returned by the search engine in response to a keyword query.
The results are of two general types, organic search (i.e., retrieved by the search engine's algorithm) and sponsored search (i.e., advertisements).
The results are normally ranked by relevance to the query.
Each result displayed on the SERP normally includes a title, a link that points to the actual page on the Web, and a short description showing where the keywords have matched content within the page for organic results.
For sponsored results, the advertiser chooses what to display.
Due to the huge number of items that are available or related to the query, there are usually several pages in response to a single search query as the search engine or the user's preferences restrict viewing to a subset of results per page.
Each succeeding page will tend to have lower ranking or lower relevancy results.
Just like the world of traditional print media and its advertising, this enables competitive pricing for page real estate, but it is complicated by the dynamics of consumer expectations and intent. Unlike static print media, where the content and the advertising on every page is the same all of the time for all viewers (even if such hard copy is localized to some degree, usually geographically by state, metro area, city, or neighbourhood), search engine results can vary based on individual factors such as browsing habits.[1] The organic search results, the query, and advertisements are the three main components of the SERP. However, the SERP of major search engines, like Google, Yahoo!, and Bing, may include many different types of enhanced results (organic and sponsored) such as rich snippets, images, maps, definitions, answer boxes, videos, or suggested search refinements.
A recent study revealed that 97% of queries in Google returned at least one rich feature.[2] The major search engines visually differentiate specific content types such as images, news, and blogs.
Many content types have specialized SERP templates and visual enhancements on the first search results page.
The search query, also known as the 'user search string', is the word or set of words that the user types into the search bar of the search engine.
The search box is located on all major search engines like Google, Yahoo, and Bing.
Users indicate the topic desired based on the keywords they enter into the search box in the search engine.
In the competition between search engines to draw the attention of more users and advertisers, consumer satisfaction has been a driving force in the evolution of the search algorithm applied to better filter the results by relevancy.
Search queries are no longer successful based upon merely finding words that match purely by spelling.
Intent and expectations have to be derived to determine whether the appropriate result is a match based upon the broader meanings drawn from context.
And that sense of context has grown from simple matching of words, and then of phrases, to the matching of ideas.
And the meanings of those ideas change over time and context.
Successful matching can be crowdsourced: what others are currently searching for and clicking on can shape the results when one enters keywords related to those other searches.
And the crowdsourcing may be focused based upon one's own social networking.
With the advent of portable devices (smartphones and wearables such as watches and various sensors), ever more contextual dimensions are available for consumers and advertisers to refine and maximize relevancy, using additional factors that may be gleaned such as a person's relative health, wealth, and other status; time of day; personal habits; mobility; location; weather; and nearby services and opportunities, whether urban or suburban, like events, food, recreation, and business.
Social context and crowdsourcing influences can also be pertinent factors.
The move away from keyboard input and the search box to voice access, aside from convenience, also makes other factors available to varying degrees of accuracy and pertinence, such as a person's character, intonation, mood, accent, and ethnicity, and even elements overheard from nearby people and the background environment.
Searching is changing from explicit keyword queries ("on TV show w, did x marry y or z", "election results for candidate x in county y on date z", "final score for team x in game y on date z") to vocalized questions tied to a particular time and location ("hey, so who won"), while still returning the results one expects.
Organic SERP listings are the natural listings generated by search engines based on a series of metrics that determine their relevance to the searched term.
Webpages that score well on a search engine's algorithmic test show in this list.
These algorithms are generally based upon factors such as quality and relevance of the content, expertise, authoritativeness, and trustworthiness of the website and author on a given topic, good user experience and backlinks.[3] People tend to view the first results on the first page.[4] Each page of search engine results usually contains 10 organic listings (however some results pages may have fewer organic listings).
A 2019 study[5] measured the click-through rate for each position on the first page.
Every major search engine with significant market share accepts paid listings.
This unique form of search engine advertising guarantees that your site will appear in the top results for the keywords you target.
Paid search listings are also called sponsored listings and/or Pay Per Click (PPC) listings.
Rich snippets are displayed by Google in the search results pages when a website contains content in structured data markup.
Structured data markup helps the Google algorithm to index and understand the content better.
Google supports rich snippets for a range of data types.[6] A featured snippet is a summary of an answer to a user's query.
This snippet appears at the top of the organic results on the SERP.
Google supports several types of featured snippets.[7] Search engines like Google and Bing have started to expand their data into encyclopedias and other rich sources of information.
Google for example calls this sort of information "Knowledge Graph", if a search query matches it will display an additional sub-window on right hand side with information from its sources.[8][9] Information about the product (example Nike), hotels, events, flights, places, businesses, people, books and movies, countries, sports groups, architecture and more can be obtained that way.
Major search engines like Google, Yahoo!, and Bing primarily use content contained within the page and fallback to metadata tags of a web page to generate the content that makes up a search snippet.[10] Generally, the HTML title tag will be used as the title of the snippet while the most relevant or useful contents of the web page (description tag or page copy) will be used for the description.
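As an illustration of the structured data markup that drives rich snippets, the sketch below assembles a small schema.org description in JSON-LD and wraps it in the script tag a page would embed; the product details are invented, and real markup should be validated with Google's testing tools.

```python
# Build a minimal JSON-LD block for structured data markup (illustrative values).
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.4", "reviewCount": "89"},
    "offers": {"@type": "Offer", "priceCurrency": "USD", "price": "119.99"},
}

snippet = '<script type="application/ld+json">\n{}\n</script>'.format(
    json.dumps(product, indent=2)
)
print(snippet)  # this block would be embedded in the page's <head> for crawlers to read
```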
Search engine result pages are protected from automated access by a range of defensive mechanisms and by the terms of service.[11] These result pages are the primary data source for search engine optimization, and website placement for competitive keywords has become an important field of business and interest.
Google has even used Twitter to warn users against this practice.[12] The sponsored (creative) results on Google can cost advertisers a large amount of money.
The most expensive keywords are for legal services, especially personal injury lawyers in highly competitive markets.
These keywords cost hundreds of US dollars per click, with the most expensive approaching 1,000 USD per sponsored click.
The process of harvesting search engine results page data is usually called "search engine scraping" or, more generally, "web crawling"; it generates the data that SEO-related companies need to evaluate websites' competitive organic and sponsored rankings.
This data can be used to track the position of websites and show the effectiveness of SEO as well as keywords that may need more SEO investment to rank higher.
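The position-tracking step can be sketched as follows, assuming the ordered result URLs have already been obtained through a compliant API or export rather than by scraping in violation of a search engine's terms; the URLs and target domain are placeholders.

```python
# Find where a target domain appears in an ordered list of SERP result URLs (illustrative data).
from urllib.parse import urlparse

serp_urls = [
    "https://www.competitor-a.com/plumbing",
    "https://www.example.com/plumbing-nyc",
    "https://www.competitor-b.com/services/plumbing",
]

def rank_of(domain, urls):
    for position, url in enumerate(urls, start=1):
        if urlparse(url).netloc.endswith(domain):
            return position
    return None  # not ranked in this list of results

print(rank_of("example.com", serp_urls))  # -> 2
```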
A metasearch engine (or search aggregator) is an online information retrieval tool that uses the data of web search engines to produce its own results.[1][2] Metasearch engines take input from a user and immediately query search engines for results.
Sufficient data is gathered, ranked, and presented to the users.
Problems such as spamming reduce the accuracy and precision of results.[3] The process of fusion aims to improve the engineering of a metasearch engine.[4] Examples of metasearch engines include Skyscanner and Kayak.com, which aggregate search results from online travel agencies and provider websites, and Excite, which aggregates results from Internet search engines.
The first person to incorporate the idea of meta searching was Daniel Dreilinger of Colorado State University.
He developed SearchSavvy, which let users search up to 20 different search engines and directories at once.
Although fast, the search engine was restricted to simple searches and thus wasn't reliable.
University of Washington student Eric Selberg released a more "updated" version called MetaCrawler.
This search engine improved on SearchSavvy's accuracy by adding its own search syntax behind the scenes, and matching the syntax to that of the search engines it was probing.
Metacrawler reduced the number of search engines queried to six, but although it produced more accurate results, it still wasn't considered as accurate as searching a query in an individual engine.[5] On May 20, 1996, HotBot was launched; then owned by Wired, it was a search engine whose results came from the Inktomi and Direct Hit databases.
It was known for its fast results and as a search engine with the ability to search within search results.
Upon being bought by Lycos in 1998, development of the search engine stagnated and its market share fell drastically.
After going through a few alterations, HotBot was redesigned into a simplified search interface, with its features being incorporated into Lycos' website redesign.[6]
A metasearch engine called Anvish was developed by Bo Shu and Subhash Kak in 1999; the search results were sorted using instantaneously trained neural networks.[7] This was later incorporated into another metasearch engine called Solosearch.[8]
In August 2000, India got its first meta search engine when HumHaiIndia.com was launched.[9] It was developed by the then 16-year-old Sumeet Lamba.[10] The website was later rebranded as Tazaa.com.[11]
Ixquick is a search engine known for its privacy policy statement.
Developed and launched in 1998 by David Bodnick, it is owned by Surfboard Holding BV.
In June 2006, Ixquick began to delete private details of its users, following the same practice as Scroogle.
Ixquick's privacy policy includes no recording of users' IP addresses, no identifying cookies, no collection of personal data, and no sharing of personal data with third parties.[12] It also uses a unique ranking system where a result is ranked by stars.
The more stars in a result, the more search engines agreed on the result.
In April 2005, Dogpile, then owned and operated by InfoSpace, Inc., collaborated with researchers from the University of Pittsburgh and Pennsylvania State University to measure the overlap and ranking differences of leading Web search engines in order to gauge the benefits of using a Metasearch engine to search the web.
Results found that from 10,316 random user-defined queries from Google, Yahoo!, and Ask Jeeves, only 3.2% of first page search results were the same across those search engines for a given query.
Another study later that year using 12,570 random user-defined queries from Google, Yahoo!, MSN Search, and Ask Jeeves found that only 1.1% of first page search results were the same across those search engines for a given query.[13] By sending multiple queries to several other search engines, a metasearch engine extends the coverage of the topic and allows more information to be found.
They use the indexes built by other search engines, aggregating and often post-processing results in unique ways.
A metasearch engine has an advantage over a single search engine because more results can be retrieved with the same amount of effort.[2] It also saves users the work of typing searches into different engines individually to look for resources.[2] Metasearching is also a useful approach if the purpose of the user's search is to get an overview of the topic or to get quick answers.
Instead of having to go through multiple search engines like Yahoo! or Google and comparing results, Metasearch engines are able to quickly compile and combine results.
They can do this either by listing results from each engine queried with no additional post-processing (Dogpile) or by analyzing the results and ranking them by their own rules (IxQuick, Metacrawler, and Vivisimo).
A Metasearch engine can also hide the searcher's IP address from the search engines queried thus providing privacy to the search.
It is in view of this that the French government in 2018 decreed that all government searches be done using Qwant, which is believed to be a metasearch engine.[14]
Metasearch engines are not capable of parsing query forms or fully translating query syntax.
The number of hyperlinks generated by metasearch engines is limited, and therefore they do not provide the user with the complete results of a query.[15] The majority of metasearch engines do not provide more than ten linked files from a single search engine, and generally do not interact with larger search engines for results.
Pay-per-click links are prioritised and are normally displayed first.[16] Metasearching also gives the illusion that there is more coverage of the topic queried, particularly if the user is searching for popular or commonplace information.
It's common to end with multiple identical results from the queried engines.
It is also harder to pass advanced search syntax along with the query, so results may not be as precise as when a user uses the advanced search interface of a specific engine.
This results in many metasearch engines using simple searching.[17]
A metasearch engine accepts a single search request from the user.
This search request is then passed on to another search engine’s database.
A metasearch engine does not create its own database of web pages but generates a federated database system, integrating data from multiple sources.[18][19][20] Since every search engine is unique and has different algorithms for generating ranked data, duplicates will also be generated.
To remove duplicates, a metasearch engine processes this data and applies its own algorithm.
A revised list is produced as an output for the user.[citation needed] When a metasearch engine contacts other search engines, those engines may respond in different ways.
Web pages that are highly ranked on many search engines are likely to be more relevant in providing useful information.[21] However, all search engines have different ranking scores for each website, and most of the time these scores are not the same.
This is because search engines prioritise different criteria and methods for scoring, hence a website might appear highly ranked on one search engine and lowly ranked on another.
This is a problem because Metasearch engines rely heavily on the consistency of this data to generate reliable accounts.[21] A Metasearch engine uses the process of Fusion to filter data for more efficient results.
The two main fusion methods used are: Collection Fusion and Data Fusion.
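A minimal sketch of the merge step described in this section: ranked lists from two hypothetical engines are combined, duplicates are collapsed by URL, and the merged list is re-scored by summing reciprocal ranks, which also rewards results that several engines agree on, much like the star system described earlier.

```python
# Toy metasearch fusion: merge ranked lists from several engines, dedupe by URL,
# and score each URL by the sum of its reciprocal ranks (illustrative data).
from collections import defaultdict

engine_results = {
    "engine_a": ["https://site1.example/", "https://site2.example/", "https://site3.example/"],
    "engine_b": ["https://site2.example/", "https://site4.example/", "https://site1.example/"],
}

scores = defaultdict(float)
for engine, urls in engine_results.items():
    for rank, url in enumerate(urls, start=1):
        scores[url] += 1.0 / rank          # higher-ranked results contribute more

fused = sorted(scores, key=scores.get, reverse=True)
for url in fused:
    print(f"{scores[url]:.2f}  {url}")
```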
Spamdexing is the deliberate manipulation of search engine indexes.
It uses a number of methods to manipulate the relevance or prominence of resources indexed in a manner unaligned with the intention of the indexing system.
Spamdexing can be very distressing for users and problematic for search engines because the content returned by searches has poor precision.[citation needed] This will eventually result in the search engine becoming unreliable for the user.
To tackle Spamdexing, search robot algorithms are made more complex and are changed almost every day to eliminate the problem.[24] It is a major problem for Metasearch engines because it tampers with the Web crawler's indexing criteria, which are heavily relied upon to format ranking lists.
Spamdexing manipulates the natural ranking system of a search engine and places websites higher on the ranking list than they would naturally be placed.[25] There are three primary methods used to achieve this: content spam, link spam, and cloaking. Content spam comprises techniques that alter the logical view that a search engine has of a page's contents.
Link spam consists of links between pages that are present for reasons other than merit.
Cloaking is an SEO technique in which different material and information is sent to the web crawler and to the web browser.[26] It is commonly used as a spamdexing technique because it can trick search engines into either visiting a site that is substantially different from the search engine's description of it or giving a certain site a higher ranking.
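As a rough illustration of how cloaking can be spotted, the hypothetical sketch below fetches the same URL twice with different User-Agent headers, one imitating a crawler and one imitating a regular browser, and flags a large difference between the two responses. Real search engines also verify crawler identity by IP address, so this is only a simplified check; it uses the third-party requests library, and the URL and threshold are illustrative assumptions.

# Hypothetical check for cloaking: compare what a page serves to a
# crawler-like User-Agent versus a browser-like one.
import requests

CRAWLER_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def looks_cloaked(url, threshold=0.5):
    as_crawler = requests.get(url, headers={"User-Agent": CRAWLER_UA}, timeout=10).text
    as_browser = requests.get(url, headers={"User-Agent": BROWSER_UA}, timeout=10).text
    # Crude similarity measure: shared words relative to the larger page.
    crawler_words, browser_words = set(as_crawler.split()), set(as_browser.split())
    overlap = len(crawler_words & browser_words) / max(len(crawler_words), len(browser_words), 1)
    return overlap < threshold  # low overlap suggests different content was served

# Example call (hypothetical URL):
# print(looks_cloaked("https://example.com/"))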
A number of metrics are available to marketers interested in search engine optimization.
Search engines and the software that creates such metrics all use their own crawled data to arrive at a numeric conclusion about a website's organic search potential.
Since these metrics can be manipulated, they can never be completely reliable for accurate and truthful results.
Google PageRank (Google PR) is one of the methods Google uses to determine a page's relevance or importance.
Important pages receive a higher PageRank and are more likely to appear at the top of the search results.
Google PageRank (PR) is a measure on a scale from 0 to 10.
Google PageRank is based on backlinks.
PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is.
The underlying assumption is that more important websites are likely to receive more links from other websites.[1] However, Google has stated there will be no more PageRank updates, rendering this metric outdated.[2] On 15 April 2016, Google officially removed the PR score from its Toolbar.[3] Alexa Traffic Rank is based on the amount of traffic recorded from users who have the Alexa toolbar installed, measured over a period of three months.
A site's ranking is based on a combined measure of Unique Visitors and Pageviews.
Unique Visitors are determined by the number of unique Alexa users who visit a site on a given day.
Pageviews are the total number of Alexa user URL requests for a site.
Alexa's Traffic Ranks are for domains only and do not give separate rankings for subpages within a domain or subdomains.[4] Domain Authority (DA), a website metric developed by Moz, is a predictive metric to determine a website's traffic and organic search engine rankings.
Domain Authority is based on different link metrics, such as number of linking root domains, number of total backlinks, and the distance of backlinks from the home page of websites.[5] Similar to many other websites like Alexa, Netcraft features a toolbar that provides users with the ability to view page-hit popularity and various web server metrics along with aggregated user provided website feedback.
Compared to Domain Authority which determines the ranking strength of an entire domain or subdomain, Page Authority measures the strength of an individual page.[6] It's a score developed by Moz on a 100-point logarithmic scale.
Unlike TrustFlow, domain authority does not account for spam.
Local search engine optimization (local SEO) is similar to (national) SEO in that it is also a process affecting the visibility of a website or a web page in a web search engine's unpaid results (SERP- search engine results page) often referred to as "natural", "organic", or "earned" results.[1] In general, the higher ranked on the search results page and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users; these visitors can then be converted into customers.[2] Local SEO, however, differs in that it is focused on optimizing a business' online presence so that its web pages will be displayed by search engines when users enter local searches for its products or services.[3] Ranking for local search involves a similar process to general SEO but includes some specific elements to rank a business for local search.
For example, local SEO is all about 'optimizing' your online presence to attract more business from relevant local searches.
The majority of these searches take place on Google, Yahoo, Bing and other search engines but for better optimization in your local area you should also use sites like Yelp, Angie's List, LinkedIn, Local business directories, social media channels and others.[4] The origin of local SEO can be traced back[5] to 2003-2005 when search engines tried to provide people with results in their vicinity as well as additional information such as opening times of a store, listings in maps, etc.
Local SEO has evolved over the years to provide a targeted online marketing approach that allows local businesses to appear based on a range of local search signals, providing a distinct difference from broader organic SEO, which prioritises relevance of the search over the distance of the searcher.
Local searches trigger search engines to display two types of results on the Search engine results page: local organic results and the 'Local Pack'.[3] The local organic results include web pages related to the search query with local relevance.
These often include directories such as Yelp, Yellow Pages, Facebook, etc.[3] The Local Pack displays businesses that have signed up with Google and taken ownership of their 'Google My Business' (GMB) listing.
The information displayed in the GMB listing and hence in the Local Pack can come from different sources:[6] Depending on the searches, Google can show relevant local results in Google Maps or Search.
This is true on both mobile and desktop devices.[7] Google has added a new Q&A feature to Google Maps, allowing users to submit questions to business owners and allowing the owners to respond.[8]
This Q&A feature is tied to the associated Google My Business account.
Google My Business (GMB) is a free tool that allows businesses to create and manage their Google listing.
These listings must represent a physical location that a customer can visit.
A Google My Business listing appears when customers search for businesses either on Google Maps or in Google SERPs.
The accuracy of these listings is a local ranking factor.
Major search engines have algorithms that determine which local businesses rank in local search.
Primary factors that impact a local business's chance of appearing in local search include proper categorization in business directories, a business's name, address, and phone number (NAP) being crawlable on the website, and citations (mentions of the local business on other relevant websites like a chamber of commerce website).[9] In 2016, a study using statistical analysis assessed how and why businesses ranked in the Local Packs and identified positive correlations between local rankings and 100+ ranking factors.[10] Although the study cannot replicate Google's algorithm, it did deliver several interesting findings. Prominence, relevance, and distance are the three main criteria Google claims to use in its algorithms to show results that best match a user's query.[12] According to a group of local SEO experts who took part in a survey, links and reviews are more important than ever to rank locally.[13] As a result of both Google and Apple offering "near me" as an option to users, some authors[14] report on how Google Trends shows very significant increases in "near me" queries.
The same authors also report that the factors correlating most strongly with Local Pack ranking for "near me" queries include the presence of the "searched city and state in backlinks' anchor text" as well as the use of "'near me' in internal link anchor text". An important update to Google's local algorithm, known as the Possum update, rolled out on 1 September 2016.[15] In summary, the Possum update led similar listings, located within the same building or even on the same street, to get filtered.
As a result, only one listing "with greater organic ranking and stronger relevance to the keyword" would be shown.[16] After the Hawk update on 22 August 2017, this filtering seems to apply only to listings located within the same building or close by (e.g.
50 feet), but not to listings located further away (e.g. 325 feet away).[16] As previously explained (see above), reviews are deemed to be an important ranking factor.
Joy Hawkins, a Google Top Contributor and local SEO expert, highlights the problems caused by fake reviews.[17]
Social media optimization (SMO) is the use of a number of outlets and communities to generate publicity to increase the awareness of a product, service, brand or event.
Types of social media involved include RSS feeds, social news and bookmarking sites, as well as social networking sites, such as Facebook, Instagram, Twitter, video sharing websites and blogging sites.
SMO is similar to search engine optimization, in that the goal is to generate web traffic and increase awareness for a website.
In general, Social media optimization refers to optimizing a website and its content to encourage more users to use and share links to the website across social media and networking sites.
SMO also refers to software tools that automate this process, or to website experts who undertake this process for clients.
The goal of SMO is to strategically create interesting online content, ranging from well-written text to eye-catching digital photos or video clips, that encourages and entices people to engage with a website and then share this content, via its weblink, with their social media contacts and friends.
Common examples of social media engagement are "liking and commenting on posts, retweeting, embedding, sharing, and promoting content".[1] Social media optimization is also an effective way of implementing online reputation management (ORM), meaning that if someone posts bad reviews of a business, a SMO strategy can ensure that the negative feedback is not the first link to come up in a list of search engine results.[2] In the 2010s, with social media sites overtaking TV as a source for news for young people, news organisations have become increasingly reliant on social media platforms for generating web traffic.
Publishers such as The Economist employ large social media teams to optimise their online posts and maximise traffic,[3] while other major publishers now use advanced artificial intelligence (AI) technology to generate higher volumes of web traffic.[4] Social media optimization is becoming an increasingly important factor in search engine optimization, which is the process of designing a website in a way so that it has as high a ranking as possible on search engines.
Search engines increasingly utilize the recommendations of users of social networks such as Reddit, Facebook, Tumblr, Twitter, YouTube, LinkedIn, Pinterest, and Instagram to rank pages in the search engine results pages.[citation needed] The implication is that when a webpage is shared or "liked" by a user on a social network, it counts as a "vote" for that webpage's quality.
Thus, search engines can use such votes to properly rank websites in search engine results pages.
Furthermore, since it is more difficult to tip the scales or influence the search engines in this way, search engines are putting more stock into social search.[5] This, coupled with increasingly personalized search based on interests and location, has significantly increased the importance of a social media presence in search engine optimization.
Due to personalized search results, location-based social media presences on websites such as Yelp, Google Places, Foursquare, and Yahoo! Local have become increasingly important.
While Social media optimization is related to search engine marketing, it differs in several ways.
Primarily, SMO focuses on driving web traffic from sources other than search engines, though improved search engine ranking is also a benefit of successful Social media optimization.
Further, SMO is helpful for targeting particular geographic regions in order to reach potential customers.
This helps in lead generation (finding new customers) and contributes to high conversion rates (i.e., converting previously uninterested individuals into people who are interested in a brand or organization).
Social media optimization is in many ways connected to the technique of viral marketing or "viral seeding" where word of mouth is created through the use of networking in social bookmarking, video and photo sharing websites.
An effective SMO campaign can harness the power of viral marketing; for example, 80% of activity on Pinterest is generated through "repinning."[citation needed] Furthermore, by following social trends and utilizing alternative social networks, websites can retain existing followers while also attracting new ones.
This allows businesses to build an online following and presence, all linking back to the company's website for increased traffic.
For example, with an effective social bookmarking campaign, not only can website traffic be increased, but a site's rankings can also be increased.
Engagement with blogs creates a similar result by sharing content through the use of RSS in the blogosphere and special blog search engines.
Social media optimization is considered an integral part of an online reputation management (ORM) or search engine reputation management (SERM) strategy for organizations or individuals who care about their online presence.[6] SMO is one of six key influencers that affect Social Commerce Construct (SCC).
Online activities such as consumers' evaluations and advice on products and services constitute part of what creates a Social Commerce Construct (SCC).[7] Social media optimization is not limited to marketing and brand building.
Increasingly, smart businesses are integrating social media participation as part of their knowledge management strategy (i.e., product/service development, recruiting, employee engagement and turnover, brand building, customer satisfaction and relations, business development and more).
Additionally, Social media optimization can be implemented to foster a community of the associated site, allowing for a healthy business-to-consumer (B2C) relationship.[8] According to technologist Danny Sullivan, the term "Social media optimization" was first used and described by marketer Rohit Bhargava[9][10] on his marketing blog in August 2006.
In the same post, Bhargava established the five important rules of Social media optimization.
Bhargava believed that by following his rules, anyone could influence the levels of traffic and engagement on their site, increase popularity, and ensure that it ranks highly in search engine results.
An additional 11 SMO rules have since been added to the list by other marketing contributors.
The 16 rules of SMO, according to one source, are as follows:[11] Bhargava's initial five rules were designed more specifically for SMO, while the list is now much broader and addresses everything that can be done across different social media platforms.
According to author and CEO of TopRank Online Marketing, Lee Odden, a Social Media Strategy is also necessary to ensure optimization.
This is a similar concept to Bhargava's list of rules for SMO.
The Social Media Strategy may consider several factors.[12] According to Lon Safko and David K.
Brake in The Social Media Bible, it is also important to act like a publisher by maintaining an effective organisational strategy, to have an original concept and unique "edge" that differentiates one's approach from competitors, and to experiment with new ideas if things do not work the first time.[2] If a business is blog-based, an effective method of SMO is using widgets that allow users to share content to their personal social media platforms.
This will ultimately reach a wider target audience and drive more traffic to the original post.
Blog widgets and plug-ins for post-sharing are most commonly linked to Facebook, Google+, LinkedIn, and Twitter.
They occasionally also link to social media platforms such as StumbleUpon, Tumblr, and Pinterest.
Many sharing widgets also include user counters which indicate how many times the content has been liked and shared across different social media pages.
This can influence whether or not new users will engage with the post, and also gives businesses an idea of what kind of posts are most successful at engaging audiences.
By using relevant and trending keywords in titles and throughout blog posts, a business can also increase search engine optimization and the chances of their content being read and shared by a large audience.[12] The root of effective SMO is the content that is being posted, so professional content creation tools can be very beneficial.
These can include editing programs such as Photoshop, GIMP, Final Cut Pro, and Dreamweaver.
Many websites also offer customization options such as different layouts to personalize a page and create a point of difference.[2] With social media sites overtaking TV as a source for news for young people, news organisations have become increasingly reliant on social media platforms for generating traffic.
A report by Reuters Institute for the Study of Journalism described how a 'second wave of disruption' had hit news organisations,[13] with publishers such as The Economist having to employ large social media teams to optimize their posts and maximize traffic.[3] Major publishers such as Le Monde and Vogue now use advanced artificial intelligence (AI) technology from Echobox to post stories more effectively and generate higher volumes of traffic.[4] Within the context of the publishing industry, even professional fields are utilizing SMO.
Because doctors want to maximize exposure to their research findings, SMO has also found a place in the medical field.[14] Social media gaming is online gaming activity performed through social media sites with friends and online gaming activity that promotes social media interaction.
Examples of the former include FarmVille, Clash of Clans, Clash Royale, FrontierVille, and Mafia Wars.
In these games a player's social network is exploited to recruit additional players and allies.
An example of the latter is Empire Avenue, a virtual stock exchange where players buy and sell shares of each other's social network worth.
Nielsen Media Research estimates that, as of June 2010, social networking and playing online games account for about one-third of all online activity by Americans.[15] Facebook has in recent years become a popular channel for advertising, alongside traditional forms such as television, radio, and print.
With over 1 billion active users, and 50% of those users logging into their accounts every day[16] it is an important communication platform that businesses can utilize and optimize to promote their brand and drive traffic to their websites.
There are three commonly used strategies to increase advertising reach on Facebook: improving post effectiveness, increasing network size, and buying more reach. Improving effectiveness and increasing network size are organic approaches, while buying more reach is a paid approach that does not require any further action.[17] Most businesses will attempt an "organic" approach to gaining a significant following before considering a paid approach.
Because Facebook requires a login, it is important that posts are public to ensure they will reach the widest possible audience.
Posts that have been heavily shared and interacted with by users are displayed as 'highlighted posts' at the top of newsfeeds.
In order to achieve this status, the posts need to be engaging, interesting, or useful.
This can be achieved by being spontaneous, asking questions, addressing current events and issues, and optimizing trending hashtags and keywords.
The more engagement a post receives, the further it will spread and the more likely it is to feature first in search results.
Another organic approach to Facebook optimization is cross-linking different social platforms.
By posting links to websites or social media sites in the profile 'about' section, it is possible to direct traffic and ultimately increase search engine optimization.
Another option is to share links to relevant videos and blog posts.[12] Facebook Connect is a functionality that launched in 2008 to allow Facebook users to sign up to different websites, enter competitions, and access exclusive promotions by logging in with their existing Facebook account details.
This is beneficial to users as they don't have to create a new login every time they want to sign up to a website, but also beneficial to businesses as Facebook users become more likely to share their content.
Often the two are interlinked, where in order to access parts of a website, a user has to like or share certain things on their personal profile or invite a number of friends to like a page.
This can lead to greater traffic flow to a website as it reaches a wider audience.
Businesses have more opportunities to reach their target markets if they choose a paid approach to SMO.
When Facebook users create an account, they are urged to fill out their personal details such as gender, age, location, education, current and previous employers, religious and political views, interests, and personal preferences such as movie and music tastes.
Facebook then takes this information and allows advertisers to use it to determine how to best market themselves to users that they know will be interested in their product.
This can also be known as micro-targeting.
If a user clicks on a link to like a page, it will show up on their profile and newsfeed.
This then feeds back into organic Social media optimization, as friends of the user will see this and be encouraged to click on the page themselves.
Although advertisers are buying mass reach, they are attracting a customer base with a genuine interest in their product.
Once a customer base has been established through a paid approach, businesses will often run promotions and competitions to attract more organic followers.[11] The number of businesses that use Facebook to advertise also holds significant relevance.
Currently there are three million businesses that advertise on Facebook.[18] This makes Facebook the world's largest platform for social media advertising.
What also holds importance is the amount of money leading businesses are spending on Facebook advertising alone.
Procter & Gamble spends $60 million every year on Facebook advertising.[19] Other advertisers on Facebook include Microsoft, with a yearly spend of £35 million, and Amazon, Nestle and American Express, each with yearly expenditures above £25 million.
Furthermore, the number of small businesses advertising on Facebook is of relevance.
This number has grown rapidly over recent years and demonstrates how important social media advertising actually is.
Currently 70% of the UK's small businesses use Facebook advertising.[20] This is a substantial number of advertisers.
Almost half of the world's small businesses use some sort of social media marketing product.
This demonstrates the impact that social media has had on the current digital marketing era.
Archie is a tool for indexing FTP archives, allowing people to find specific files.
It is considered to be the first Internet search engine.[3] The original implementation was written in 1990 by Alan Emtage, then a postgraduate student at McGill University in Montreal, and Bill Heelan, who studied at Concordia University in Montreal and worked at McGill University at the same time.[4] The Archie service began as a project for students and volunteer staff at the McGill University School of Computer Science in 1987,[5] when Peter Deutsch (systems manager for the School[5]), Emtage, and Heelan were asked to connect the School of Computer Science to the Internet.[6] The earliest versions of Archie, written by Alan Emtage, simply contacted a list of FTP archives on a regular basis (contacting each roughly once a month, so as not to waste too many resources of the remote servers) and requested a listing.
These listings were stored in local files to be searched using the Unix grep command.
The name derives from the word "archive" without the v.
Alan Emtage has said that contrary to popular belief, there was no association with the Archie Comics and that he despised them.[7] Despite this, other early Internet search technologies such as Jughead and Veronica were named after characters from the comics.
Anarchie, one of the earliest graphical FTP clients, was named for its ability to perform Archie searches.
Archie was developed as a tool for mass discovery and the concept was simple.
The developers populated the engine's servers with databases of anonymous FTP host directories.[8] This was used to find specific file titles since the list was plugged in to a searchable database of FTP sites.[9] Bill Heelan and Peter Deutsch wrote a script allowing people to log in and search collected information using the Telnet protocol at the host "archie.mcgill.ca" [132.206.2.3].[5] Later, more efficient front- and back-ends were developed, and the system spread from a local tool, to a network-wide resource, and a popular service available from multiple sites around the Internet.
The collected data would be exchanged between the neighbouring Archie servers.
The servers could be accessed in multiple ways: using a local client (such as archie or xarchie); telnetting to a server directly; sending queries by electronic mail;[10] and later via a World Wide Web interface.
At the zenith of its fame, Archie accounted for 50% of Montreal Internet traffic.[citation needed] In 1992, Emtage, along with Peter Deutsch and some financial help from McGill University, formed Bunyip Information Systems, the world's first company expressly founded for and dedicated to providing Internet information services, with a licensed commercial version of Archie used by millions of people worldwide.
Bill Heelan followed them into Bunyip soon after, where he, together with Bibi Ali and Sandro Mazzucato, was part of the so-called Archie Group.
The group significantly updated the Archie database and indexed web pages.
Work on the search engine ceased in the late 1990s.
A legacy Archie server is still kept active for historical purposes in Poland at the University of Warsaw's Interdisciplinary Centre for Mathematical and Computational Modelling.
Click-through rate (CTR) is the ratio of users who click on a specific link to the number of total users who view a page, email, or advertisement.
It is commonly used to measure the success of an online advertising campaign for a particular website as well as the effectiveness of email campaigns.[1][2] Click-through rates for ad campaigns vary tremendously.
The very first online display ad, shown for AT&T on the website HotWired in 1994, had a 44% Click-through rate.[3] Over time, the overall rate of users' clicks on webpage banner ads has decreased.
The purpose of Click-through rates is to measure the ratio of clicks to impressions of an online ad or email marketing campaign.
Generally the higher the CTR the more effective the marketing campaign has been at bringing people to a website.[4] Most commercial websites are designed to elicit some sort of action, whether it be to buy a book, read a news article, watch a music video, or search for a flight.
People rarely visit websites with the intention of viewing advertisements, in the same way that few people watch television to view the commercials.[5] While marketers want to know the reaction of the web visitor, with current technology it is nearly impossible to quantify the emotional reaction to the site and the effect of that site on the firm's brand.
However, Click-through rate is an easy piece of data to acquire.
The Click-through rate measures the proportion of visitors who clicked on an advertisement that redirected them to another page where they might purchase an item or learn more about a product or service.
Forms of interaction with advertisements other than clicking are possible but rare; "Click-through rate" is the most commonly used term to describe the efficacy of an advert.[5] The Click-through rate of an advertisement is the number of times a click is made on the ad, divided by the number of times the ad is "served", that is, shown (also called impressions), expressed as a percentage. Click-through rates for banner ads have decreased over time.[6] When banner ads first started to appear, it was not uncommon to have rates above five percent.
They have fallen since then, currently averaging closer to 0.2 or 0.3 percent.[7] In most cases, a 2% Click-through rate would be considered very successful, though the exact number is hotly debated and would vary depending on the situation.
The average Click-through rate of 3% in the 1990s declined to 2.4%–0.4% by 2002.[8] Since advertisers typically pay more for a high Click-through rate, getting many click-throughs with few purchases is undesirable to advertisers.[7] Similarly, by selecting an appropriate advertising site with high affinity (e.g., a movie magazine for a movie advertisement), the same banner can achieve a substantially higher CTR.
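To make the calculation concrete, the short sketch below computes CTR exactly as defined above: clicks divided by impressions, expressed as a percentage. The figures used are illustrative assumptions only.

# Click-through rate: clicks divided by impressions, as a percentage.
def click_through_rate(clicks, impressions):
    if impressions == 0:
        return 0.0
    return 100.0 * clicks / impressions

# Illustrative figures: 2,000 clicks on 1,000,000 served impressions.
print(click_through_rate(2000, 1_000_000))  # 0.2 (percent), typical of modern banner ads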
Though personalized ads, unusual formats, and more obtrusive ads typically result in higher Click-through rates than standard banner ads, overly intrusive ads are often avoided by viewers.[8][9] Modern online advertising has moved beyond just using banner ads.
Popular search engines allow advertisers to display ads alongside the search results triggered by a search user.
These ads are usually in text format and may include additional links and information like phone numbers, addresses and specific product pages.[10] This additional information moves away from the poor user experience that can be created from intrusive banner ads and provides useful information to the search user, resulting in higher Click-through rates for this format of pay-per-click Advertising.
Having a high Click-through rate isn't the only goal for an online advertiser, who may develop campaigns to raise awareness or to gain valuable traffic overall, sacrificing some Click-through rate for that purpose.
Search engine advertising has become a significant element of the Web browsing experience.
Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad.
This ranking has a strong impact on the revenue the search engine receives from the ads.
Further, showing the user an ad that they prefer to click on improves user satisfaction.
For these reasons, there is an increasing interest in accurately estimating the Click-through rate of ads in a recommender system.[citation needed] An email Click-through rate is defined as the number of recipients who click one or more links in an email and land on the sender's website, blog, or other desired destination.
More simply, email Click-through rates represent the number of clicks that your email generated.[11][12] Email Click-through rate is expressed as a percentage, and calculated by dividing the number of click-throughs by the number of tracked message deliveries.[13] Most email marketers use this metric, along with open rate, bounce rate and other metrics, to understand the effectiveness and success of their email campaign.[14] In general there is no ideal Click-through rate.
This metric can vary based on the type of email sent, how frequently emails are sent, how the list of recipients is segmented, how relevant the content of the email is to the audience, and many other factors.[15] Even time of day can affect Click-through rate.
Sunday appears to generate considerably higher Click-through rates on average when compared to the rest of the week.[16] Every year various types of research studies are conducted to track the overall effectiveness of Click-through rates in email marketing.[17][18] Experts on Search engine optimization (SEO) have claimed since the mid-2010s that Click-through rate has an impact on organic rankings.
Numerous case studies have been published to support this theory.
Proponents supporting this theory often claim that Click-through rate is a ranking signal for Google's RankBrain algorithm.
In a video interview with Dan Petrovic, he states, "There is absolutely no shadow of a doubt that CTR is a ranking signal.
CTR is not only a ranking signal, CTR is essential to Google’s self-analytics."[19] In an article by Neil Patel, Patel quotes Matt Cutts saying, "It doesn’t really matter how often you show up.
It matters how often you get clicked on..." He also cites a study where a 20% increase in Click-through rates resulted in 30% more organic clicks.[20] Opponents of this theory claim Click-through rate has little or no impact on organic rankings.
Bartosz Góralewicz published the results of an experiment on Search Engine Land where he claims, "Despite popular belief, Click-through rate is not a ranking factor.
Even massive organic traffic won’t affect your website’s organic positions."[21] More recently, Barry Schwartz wrote on Search Engine Land, "...Google has said countless times, in writing, at conferences, that CTR is not used in their ranking algorithm."[22]
A search engine is an information retrieval system designed to help find information stored on a computer system.
The search results are usually presented in a list and are commonly called hits.
Search engines help to minimize the time required to find information and the amount of information which must be consulted, akin to other techniques for managing information overload.[citation needed] The most public, visible form of a search engine is a Web search engine which searches for information on the World Wide Web.
Search engines provide an interface to a group of items that enables users to specify criteria about an item of interest and have the engine find the matching items.
The criteria are referred to as a search query.
In the case of text search engines, the search query is typically expressed as a set of words that identify the desired concept that one or more documents may contain.[1] There are several styles of search query syntax that vary in strictness.
Whereas some text search engines require users to enter two or three words separated by white space, other search engines may enable users to specify entire documents, pictures, sounds, and various forms of natural language.
Some search engines apply improvements to search queries to increase the likelihood of providing a quality set of items through a process known as query expansion.
Query understanding methods can be used to standardize the query language.
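The query expansion mentioned above can take many forms. The sketch below shows one simple, hypothetical version: each query term is expanded with synonyms from a small hand-made dictionary. Production systems use far richer signals (click logs, language models, spelling correction), so this is only meant to illustrate the idea; the synonym table is an assumption for the example.

# Minimal, hypothetical query expansion using a synonym dictionary.
SYNONYMS = {
    "cheap": ["inexpensive", "affordable"],
    "laptop": ["notebook"],
}

def expand_query(query):
    expanded = []
    for term in query.lower().split():
        expanded.append(term)                  # keep the original term
        expanded.extend(SYNONYMS.get(term, []))  # add any known synonyms
    return expanded

print(expand_query("cheap laptop"))
# ['cheap', 'inexpensive', 'affordable', 'laptop', 'notebook']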
The list of items that meet the criteria specified by the query is typically sorted, or ranked.
Ranking items by relevance (from highest to lowest) reduces the time required to find the desired information.
Probabilistic search engines rank items based on measures of similarity (between each item and the query, typically on a scale of 1 to 0, 1 being most similar) and sometimes popularity or authority (see Bibliometrics) or use relevance feedback.
Boolean search engines typically only return items which match exactly without regard to order, although the term boolean search engine may simply refer to the use of boolean-style syntax (the use of operators AND, OR, NOT, and XOR) in a probabilistic context.
To provide a set of matching items that are sorted according to some criteria quickly, a search engine will typically collect metadata about the group of items under consideration beforehand through a process referred to as indexing.
The index typically requires a smaller amount of computer storage, which is why some search engines only store the indexed information and not the full content of each item, and instead provide a method of navigating to the items in the search engine result page.
Alternatively, the search engine may store a copy of each item in a cache so that users can see the state of the item at the time it was indexed or for archive purposes or to make repetitive processes work more efficiently and quickly.
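The indexing and ranked retrieval just described can be sketched in a few lines. The example below builds a small inverted index and ranks documents by a crude term-frequency score; it is illustrative only, and the sample documents are assumptions, ignoring the normalization, link analysis, and other signals real engines use.

# Minimal inverted index with a crude term-frequency ranking.
from collections import defaultdict

documents = {
    "doc1": "search engines index the web",
    "doc2": "a metasearch engine reuses the results of other engines",
    "doc3": "web crawlers download pages for the index",
}

index = defaultdict(dict)          # term -> {doc_id: term frequency}
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

def search(query):
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id, freq in index.get(term, {}).items():
            scores[doc_id] += freq
    return sorted(scores, key=scores.get, reverse=True)

print(search("web index"))  # ['doc1', 'doc3']; doc2 does not match either term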
Other types of search engines do not store an index.
Crawler, or spider type search engines (a.k.a.
real-time search engines) may collect and assess items at the time of the search query, dynamically considering additional items based on the contents of a starting item (known as a seed, or seed URL in the case of an Internet crawler).
Meta search engines store neither an index nor a cache and instead simply reuse the index or results of one or more other search engines to provide an aggregated, final set of results.
Google Search, also referred to as Google Web Search or simply Google, is a web search engine developed by Google.
It is the most used search engine on the World Wide Web across all platforms, with 92.62% market share as of June 2019,[4] handling more than 5.4 billion searches each day.[5] The order of search results returned by Google is based, in part, on a priority rank system called "PageRank".
Google Search also provides many different options for customized search, using symbols to include, exclude, specify or require certain search behavior, and offers specialized interactive experiences, such as flight status and package tracking, weather forecasts, currency, unit and time conversions, word definitions, and more.
The main purpose of Google Search is to search for text in publicly accessible documents offered by web servers, as opposed to other data, such as images or data contained in databases.
It was originally developed in 1997 by Larry Page, Sergey Brin, and Scott Hassan.[6][7][8] In June 2011, Google introduced "Google Voice Search" to search for spoken, rather than typed, words.[9] In May 2012, Google introduced a Knowledge Graph semantic search feature in the U.S.
Analysis of the frequency of search terms may indicate economic, social and health trends.[10] Data about the frequency of use of search terms on Google can be openly queried via Google Trends and has been shown to correlate with flu outbreaks and unemployment levels, and to provide this information faster than traditional reporting methods and surveys.
As of mid-2016, Google's search engine has begun to rely on deep neural networks.[11] Competitors of Google include Baidu and Soso.com in China; Naver.com and Daum.net in South Korea; Yandex in Russia; Seznam.cz in the Czech Republic; Qwant in France;[12] Yahoo in Japan, Taiwan and the US, as well as Bing and DuckDuckGo.[13] Some smaller search engines offer facilities not available with Google, e.g.
not storing any private or tracking information.
Within the U.S., as of July 2018, Bing handled 24.2 percent of all search queries.
During the same period of time, Oath (formerly known as Yahoo) had a search market share of 11.5 percent.
Market leader Google generated 63.2 percent of all core search queries in the U.S.[14] Google indexes hundreds of terabytes of information from web pages.[15] For websites that are currently down or otherwise not available, Google provides links to cached versions of the site, formed by the search engine's latest indexing of that page.[16] Additionally, Google indexes some file types, being able to show users PDFs, Word documents, Excel spreadsheets, PowerPoint presentations, certain Flash multimedia content, and plain text files.[17] Users can also activate "SafeSearch", a filtering technology aimed at preventing explicit and pornographic content from appearing in search results.[18] Despite Google Search's immense index, sources generally assume that Google is only indexing less than 5% of the total Internet, with the rest belonging to the deep web, inaccessible through its search tools.[15][19][20] In 2012, Google changed its search indexing tools to demote sites that had been accused of piracy.[21] In October 2016, Gary Illyes, a webmaster trends analyst with Google, announced that the search engine would be making a separate, primary web index dedicated for mobile devices, with a secondary, less up-to-date index for desktop use.
The change was a response to the continued growth in mobile usage, and a push for web developers to adopt a mobile-friendly version of their websites.[22][23] In December 2017, Google began rolling out the change, having already done so for multiple websites.[24] In August 2009, Google invited web developers to test a new search architecture, codenamed "Caffeine", and give their feedback.
The new architecture provided no visual differences in the user interface, but added significant speed improvements and a new "under-the-hood" indexing infrastructure.
The move was interpreted in some quarters as a response to Microsoft's recent release of an upgraded version of its own search service, renamed Bing, as well as the launch of Wolfram Alpha, a new search engine based on "computational knowledge".[25][26] Google announced completion of "Caffeine" on June 8, 2010, claiming 50% fresher results due to continuous updating of its index.[27] With "Caffeine", Google moved its back-end indexing system away from MapReduce and onto Bigtable, the company's distributed database platform.[28][29] In August 2018, Danny Sullivan from Google announced a broad core algorithm update.
According to analysis by the industry publications Search Engine Watch and Search Engine Land, the update demoted medical and health-related websites that were not user friendly and did not provide a good user experience.
This is why industry experts named it "Medic".[30] Google holds YMYL (Your Money or Your Life) pages to very high standards.
This is because misinformation can affect users financially, physically or emotionally.
Therefore, the update targeted particularly those YMYL pages that have low-quality content and misinformation.
This resulted in the algorithm targeting health and medical related websites more than others.
However, many other websites from other industries were also negatively affected.[31] Google Search consists of a series of localized websites.
The largest of those, the google.com site, is the top most-visited website in the world.[32] Some of its features include a definition link for most searches including dictionary words, the number of results you got on your search, links to other searches (e.g.
for words that Google believes to be misspelled, it provides a link to the search results using its proposed spelling), and many more.
Google Search accepts queries as normal text, as well as individual keywords.[33] It automatically corrects misspelled words, and yields the same results regardless of capitalization.[33] For more customized results, one can use a wide variety of operators.[34][35] Google applies query expansion to submitted search queries, using techniques to deliver results that it considers "smarter" than the query users actually submitted.
This technique involves several steps.[36] In 2008, Google started to give users autocompleted search suggestions in a list below the search bar while typing.[37] Google's homepage includes a button labeled "I'm Feeling Lucky".
This feature originally allowed users to type in their search query, click the button and be taken directly to the first result, bypassing the search results page.
With the 2010 announcement of Google Instant, an automatic feature that immediately displays relevant results as users are typing in their query, the "I'm Feeling Lucky" button disappeared, requiring users to opt out of Instant results through search settings in order to keep using the "I'm Feeling Lucky" functionality.[38] In 2012, "I'm Feeling Lucky" was changed to serve as an advertisement for Google services; when users hover their mouse over the button, it spins and shows an emotion ("I'm Feeling Puzzled" or "I'm Feeling Trendy", for instance), and, when clicked, takes users to a Google service related to that emotion.[39] Tom Chavez of "Rapt", a firm helping to determine a website's advertising worth, estimated in 2007 that Google lost $110 million in revenue per year due to use of the button, which bypasses the advertisements found on the search results page.[40] Besides the main text-based search-engine features of Google Search, it also offers multiple quick, interactive experiences.
These include, but are not limited to:[41][42][43] During Google's developer conference, Google I/O, in May 2013, the company announced that, on Google Chrome and Chrome OS, users would be able to say "OK Google", with the browser initiating an audio-based search, with no button presses required.
After having the answer presented, users can follow up with additional, contextual questions; one example involves initially asking "OK Google, will it be sunny in Santa Cruz this weekend?", hearing a spoken answer, and replying with "how far is it from here?"[44][45] An update to the Chrome browser with voice-search functionality rolled out a week later, though it required a button press on a microphone icon rather than "OK Google" voice activation.[46] Google released a browser extension for the Chrome browser, named with a "beta" tag for unfinished development, shortly thereafter.[47] In May 2014, the company officially added "OK Google" into the browser itself;[48] they removed it in October 2015, citing low usage, though the microphone icon for activation remained available.[49] In May 2016, 20% of search queries on mobile devices were done through voice.[50] "Universal search" was launched by Google on May 16, 2007 as an idea that merged the results from different kinds of search types into one.
Prior to Universal search, a standard Google Search would consist of links only to websites.
Universal search, however, incorporates a wide variety of sources, including websites, news, pictures, maps, blogs, videos, and more, all shown on the same search results page.[51][52] Marissa Mayer, then-vice president of search products and user experience, described the goal of Universal search as "we're attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results."[53] In June 2017, Google expanded its search results to cover available job listings.
The data is aggregated from various major job boards and collected by analyzing company homepages.
Initially only available in English, the feature aims to simplify finding jobs suitable for each user.[54][55] In May 2009, Google announced that they would be parsing website microformats in order to populate search result pages with "Rich snippets".
Such snippets include additional details about results, such as displaying reviews for restaurants and social media accounts for individuals.[56] In May 2016, Google expanded on the "Rich snippets" format to offer "Rich cards", which, similarly to snippets, display more information about results but show them at the top of the mobile website in a swipeable carousel-like format.[57] Originally limited to movie and recipe websites in the United States only, the feature expanded to all countries globally in 2017.[58] Web publishers now have greater control over their rich snippets.
Preview settings from these meta tags will become effective in mid-to-late October 2019 and may take about a week for the global rollout to complete.[59] The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with information gathered from a variety of sources.[60] This information is presented to users in a box to the right of search results.[61] Knowledge Graph boxes were added to Google's search engine in May 2012,[60] starting in the United States, with international expansion by the end of the year.[62] The information covered by the Knowledge Graph grew significantly after launch, tripling its original size within seven months,[63] and being able to answer "roughly one-third" of the 100 billion monthly searches Google processed in May 2016.[64] The information is often used as a spoken answer in Google Assistant[65] and Google Home searches.[66] The Knowledge Graph has been criticized for providing answers without source attribution.[64] Google Search has been accused of using so-called zero-click searches to prevent a large part of the traffic from leaving its page for third-party publishers.
As a result, 71% of searches end on the Google Search page.
In the case of one specific query, out of 890,000 searches on Google, only 30,000 resulted in the user clicking through to the results website.[67] In May 2017, Google enabled a new "Personal" tab in Google Search, letting users search for content in their Google accounts' various services, including email messages from Gmail and photos from Google Photos.[68][69] The Google feed is a personalized stream of articles, videos, and other news-related content.
The feed contains a "mix of cards" which show topics of interest based on users' interactions with Google, or topics they choose to follow directly.[70] Cards include, "links to news stories, YouTube videos, sports scores, recipes, and other content based on what [Google] determined you're most likely to be interested in at that particular moment."[70] Users can also tell Google they're not interested in certain topics to avoid seeing future updates.
The Google feed launched in December 2016[71] and received a major update in July 2017.[72] As of May 2018, the Google feed can be found on the Google app and by swiping left on the home screen of certain Android devices.
As of 2019, Google will not allow political campaigns worldwide to target their advertising at people in order to influence how they vote.[73] Google's rise was largely due to a patented algorithm called PageRank which helps rank web pages that match a given search string.[74] When Google was a Stanford research project, it was nicknamed BackRub because the technology checks backlinks to determine a site's importance.
Other keyword-based methods to rank search results, used by many search engines that were once more popular than Google, would check how often the search terms occurred in a page, or how strongly associated the search terms were within each resulting page.
The PageRank algorithm instead analyzes human-generated links assuming that web pages linked from many important pages are also important.
The algorithm computes a recursive score for pages, based on the weighted sum of other pages linking to them.
PageRank is thought to correlate well with human concepts of importance.
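As a concrete illustration of the recursive scoring just described, the sketch below runs a basic power-iteration version of PageRank on a tiny hypothetical link graph, with the commonly used damping factor of 0.85. It is a textbook simplification, not Google's production algorithm, and the example graph is an assumption for demonstration.

# Simplified PageRank by power iteration on a tiny link graph.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:                 # dangling page: spread its rank evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:      # each link passes an equal share of rank
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Hypothetical graph: page -> pages it links to.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))  # "c" ends up with the highest score, since the most rank flows into it

Pages that are linked to by other well-linked pages accumulate the most rank, which is the "weighted sum" intuition described above.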
In addition to PageRank, Google, over the years, has added many other secret criteria for determining the ranking of resulting pages.
This is reported to comprise over 250 different indicators,[75][76] the specifics of which are kept secret to avoid difficulties created by scammers and help Google maintain an edge over its competitors globally.
PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used for RankDex, developed by Robin Li in 1996.
Larry Page's patent for PageRank filed in 1998 includes a citation to Li's earlier patent.
Li later went on to create the Chinese search engine Baidu in 2000.[77][78][79] In a potential hint of Google's future direction of their Search algorithm, Google's then chief executive Eric Schmidt said in a 2007 interview with the Financial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'".[80] Schmidt reaffirmed this during a 2010 interview with the Wall Street Journal: "I actually think most people don't want Google to answer their questions, they want Google to tell them what they should be doing next."[81] In 2013, the European Commission found that Google Search favored Google's own products, instead of the best result for consumers' needs.[82] In February 2015, Google announced a major change to its mobile search algorithm which would favor mobile-friendly websites over others.
Nearly 60% of Google Searches come from mobile phones.
Google says it wants users to have access to premium quality websites.
Those websites which lack a mobile-friendly interface would be ranked lower, and it is expected that this update will cause a shake-up of rankings.
Businesses that fail to update their websites accordingly could see a dip in their regular website traffic.[83] Because Google is the most popular search engine, many webmasters attempt to influence their website's Google rankings.
An industry of consultants has arisen to help websites increase their rankings on Google and on other search engines.
This field, called search engine optimization, attempts to discern patterns in search engine listings, and then develop a methodology for improving rankings to draw more searchers to their clients' sites.
Search engine optimization encompasses both "on page" factors (like body copy, title elements, H1 heading elements and image alt attribute values) and Off Page Optimization factors (like anchor text and PageRank).
The general idea is to affect Google's relevance algorithm by incorporating the keywords being targeted in various places "on page", in particular the title element and the body copy (note: the higher up in the page, presumably the better its keyword prominence and thus the ranking).
Too many occurrences of the keyword, however, cause the page to look suspect to Google's spam checking algorithms.
Google has published guidelines for website owners who would like to raise their rankings when using legitimate optimization consultants.[84] It has been hypothesized, and, allegedly, is the opinion of the owner of one business about which there have been numerous complaints, that negative publicity, for example, numerous consumer complaints, may serve as well to elevate page rank on Google Search as favorable comments.[85] The particular problem addressed in The New York Times article, which involved DecorMyEyes, was addressed shortly thereafter by an undisclosed fix in the Google algorithm.
According to Google, it was not the frequently published consumer complaints about DecorMyEyes which resulted in the high ranking but mentions on news websites of events which affected the firm such as legal actions against it.
Google Search Console helps to check for websites that use duplicate or copyrighted content.[86] In 2013, Google significantly upgraded its search algorithm with "Hummingbird".
Its name was derived from the speed and accuracy of the hummingbird.[87] The change was announced on September 26, 2013, having already been in use for a month.[88] "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords.[87] It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage.[89] The upgrade marked the most significant change to Google Search in years, with more "human" search interactions[90] and a much heavier focus on conversation and meaning.[87] Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.[91] On certain occasions, the logo on Google's webpage will change to a special version, known as a "Google Doodle".
This is a picture, drawing, animation or interactive game that includes the logo.
It is usually done for a special event or day although not all of them are well known.[92] Clicking on the Doodle links to a string of Google Search results about the topic.
The first was a reference to the Burning Man Festival in 1998,[93][94] and others have been produced for the birthdays of notable people like Albert Einstein, historical events like the interlocking Lego block's 50th anniversary and holidays like Valentine's Day.[95] Some Google Doodles have interactivity beyond a simple search, such as the famous "Google Pacman" version that appeared on May 21, 2010.
Google offers a "Google Search" mobile app for Android and iOS devices.[96] The mobile apps exclusively feature a "feed", a news feed-style page of continually-updated developments on news and topics of interest to individual users.
Android devices were introduced to a preview of the feed in December 2016,[97] while it was made official on both Android and iOS in July 2017.[98][99] In April 2016, Google updated its Search app on Android to feature "Trends"; search queries gaining popularity appeared in the autocomplete box along with normal query autocompletion.[100] The update received significant backlash, due to encouraging search queries unrelated to users' interests or intentions, prompting the company to issue an update with an opt-out option.[101] In September 2017, the Google Search app on iOS was updated to feature the same functionality.[102] Until May 2013, Google Search had offered a feature to translate search queries into other languages.
A Google spokesperson told Search Engine Land that "Removing features is always tough, but we do think very hard about each decision and its implications for our users.
Unfortunately, this feature never saw much pick up".[103] Instant search was announced in September 2010 as a feature that displayed suggested results while the user typed in their search query.
The primary advantage of the new system was its ability to save time, with Marissa Mayer, then-vice president of search products and user experience, proclaiming that the feature would save 2–5 seconds per search, elaborating that "That may not seem like a lot at first, but it adds up.
With Google Instant, we estimate that we'll save our users 11 hours with each passing second!"[104] Matt Van Wagner of Search Engine Land wrote that "Personally, I kind of like Google Instant and I think it represents a natural evolution in the way search works", and also praised Google's efforts in public relations, writing that "With just a press conference and a few well-placed interviews, Google has parlayed this relatively minor speed improvement into an attention-grabbing front-page news story".[105] The upgrade also became notable for the company switching Google Search's underlying technology from HTML to AJAX.[106] Instant Search could be disabled via Google's "preferences" menu for those who didn't want its functionality.[107] The publication 2600: The Hacker Quarterly compiled a list of words that Google Instant did not show suggested results for, with a Google spokesperson giving the following statement to Mashable:[108] There are a number of reasons you may not be seeing search queries for a particular topic.
Among other things, we apply a narrow set of removal policies for pornography, violence, and hate speech.
It's important to note that removing queries from Autocomplete is a hard problem, and not as simple as blacklisting particular terms and phrases.
In search, we get more than one billion searches each day.
Because of this, we take an algorithmic approach to removals, and just like our search algorithms, these are imperfect.
We will continue to work to improve our approach to removals in Autocomplete, and are listening carefully to feedback from our users.
Our algorithms look not only at specific words, but compound queries based on those words, and across all languages.
So, for example, if there's a bad word in Russian, we may remove a compound word including the transliteration of the Russian word into English.
We also look at the search results themselves for given queries.
So, for example, if the results for a particular query seem pornographic, our algorithms may remove that query from Autocomplete, even if the query itself wouldn't otherwise violate our policies.
This system is neither perfect nor instantaneous, and we will continue to work to make it better. PC Magazine discussed the inconsistency in how some forms of the same topic are allowed; for instance, "lesbian" was blocked, while "gay" was not, and "cocaine" was blocked, while "crack" and "heroin" were not.
The report further stated that seemingly normal words were also blocked due to pornographic innuendos, most notably "scat", likely due to having two completely separate contextual meanings, one for music and one for a sexual practice.[109] On July 26, 2017, Google removed Instant results, due to a growing number of searches on mobile devices, where interaction with search, as well as screen sizes, differ significantly from a computer.[110][111] Various search engines provide encrypted Web search facilities.
In May 2010, Google rolled out SSL-encrypted web search.[112] The encrypted search was accessed at encrypted.google.com.[113] However, the web search is encrypted via Transport Layer Security (TLS) by default today, thus every search request should be automatically encrypted if TLS is supported by the web browser.[114] On its support website, Google announced that the address encrypted.google.com would be turned off April 30, 2018, stating that all Google products and most new browsers use HTTPS connections as the reason for the discontinuation.[115] Google Real-Time Search was a feature of Google Search in which search results also sometimes included real-time information from sources such as Twitter, Facebook, blogs, and news websites.[116] The feature was introduced on December 7, 2009[117] and went offline on July 2, 2011 after the deal with Twitter expired.[118] Real-Time Search included Facebook status updates beginning on February 24, 2010.[119] A feature similar to Real-Time Search was already available on Microsoft's Bing search engine, which showed results from Twitter and Facebook.[120] The interface for the engine showed a live, descending "river" of posts in the main region (which could be paused or resumed), while a bar chart metric of the frequency of posts containing a certain search term or hashtag was located in the right-hand corner of the page, above a list of most frequently reposted posts and outgoing links.
Hashtag search links were also supported, as were "promoted" tweets hosted by Twitter (located persistently on top of the river) and thumbnails of retweeted image or video links.
In January 2011, geolocation links of posts were made available alongside results in Real-Time Search.
In addition, posts containing syndicated or attached shortened links were made searchable by the link: query option.
In July 2011, Real-Time Search became inaccessible, with the Real-Time link in the Google sidebar disappearing and a custom 404 error page generated by Google being returned at its former URL.
Google originally suggested that the interruption was temporary and related to the launch of Google+;[121] they subsequently announced that it was due to the expiry of a commercial arrangement with Twitter to provide access to tweets.[122] Searches made by search engines, including Google, leave traces.
This raises concerns about privacy.
In principle, if details of a user's searches are found, those with access to the information—principally state agencies responsible for law enforcement and similar matters—can make deductions about the user's activities.
This has been used for the detection and prosecution of lawbreakers; for example a murderer was found and convicted after searching for terms such as "tips with killing with a baseball bat".[123] A search may leave traces both on a computer used to make the search, and in records kept by the search provider.
When using a search engine through a browser program on a computer, search terms and other information may be stored on the computer by default, unless the browser is set not to do this, or they are erased.
Saved terms may be discovered on forensic analysis of the computer.
An Internet Service Provider (ISP) or search engine provider (e.g., Google) may store records which relate search terms to an IP address and a time.[124] Whether such logs are kept, and access to them by law enforcement agencies, is subject to legislation in different jurisdictions and working practices; the law may mandate, prohibit, or say nothing about logging of various types of information.
Some search engines, located in jurisdictions where it is not illegal, make a feature of not storing user search information.[125] The keywords suggested by the Autocomplete feature reveal the research of a population of users, which is made possible by an identity management system.
Volumes of personal data are collected via Eddystone web and proximity beacons.[citation needed] Google has been criticized for placing long-term cookies on users' machines to store these preferences, a tactic which also enables them to track a user's search terms and retain the data for more than a year.[126] Since 2012, Google Inc. has globally introduced encrypted connections for most of its clients, in order to bypass government blocking of its commercial and IT services.[127] In late June 2011, Google introduced a new look to the Google home page in order to boost the use of the Google+ social tools.[128] One of the major changes was replacing the classic navigation bar with a black one.
Google's digital creative director Chris Wiggins explains: "We're working on a project to bring you a new and improved Google experience, and over the next few months, you'll continue to see more updates to our look and feel."[129] The new navigation bar has been negatively received by a vocal minority.[130] In November 2013, Google started testing yellow labels for advertisements displayed in search results, to improve user experience.
The new labels, highlighted in yellow and aligned to the left of each sponsored link, help users clearly differentiate between organic and sponsored results.[131] On December 15, 2016, Google rolled out a new desktop search interface that mimics its modular mobile user interface.
The mobile design consists of a tab-based layout that highlights search features in boxes and works by imitating the desktop Knowledge Graph real estate, which appears in the right-hand rail of the search engine results page; these featured elements frequently include Twitter carousels, People Also Search For, and Top Stories (vertical and horizontal design) modules.
The Local Pack and Answer Box were two of the original features of the Google SERP that were primarily showcased in this manner, but this new layout creates a previously unseen level of design consistency for Google results.[132] In addition to its tool for searching web pages, Google also provides services for searching images, Usenet newsgroups, news websites, videos (Google Videos), searching by locality, maps, and items for sale online.
Google Videos allows searching the World Wide Web for video clips.[133] The service evolved from Google Video, Google's discontinued video hosting service that also allowed users to search the web for video clips.[133] By 2012, Google had indexed over 30 trillion web pages and was receiving 100 billion queries per month.[134] It also caches much of the content that it indexes.
Google operates other tools and services including Google News, Google Shopping, Google Maps, Google Custom Search, Google Earth, Google Docs, Picasa (discontinued), Panoramio (discontinued), YouTube, Google Translate, Google Blog Search and Google Desktop Search.
There are also products available from Google that are not directly search-related.
Gmail, for example, is a webmail application, but it still includes search features; Google Browser Sync does not offer any search facilities, although it aims to organize a user's browsing time.
Google also launches many new beta products, such as Google Social Search or Google Image Swirl.
In 2009, Google claimed that a search query requires altogether about 1 kJ or 0.0003 kW·h,[135] which is enough to raise the temperature of one liter of water by 0.24 °C.
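The comparison can be checked with the specific heat capacity of water (about 4.184 J per gram per °C); the short sketch below, using only the figures stated above, reproduces the 0.24 °C estimate.
```typescript
// Rough check of the "1 kJ per search ≈ 0.24 °C per litre of water" comparison above.
const energyPerSearchJoules = 1_000;   // ~1 kJ per query, as claimed by Google in 2009
const specificHeatWater = 4.184;       // J per gram per °C
const gramsPerLitre = 1_000;           // mass of one litre of water

const temperatureRise = energyPerSearchJoules / (specificHeatWater * gramsPerLitre);
console.log(`Temperature rise: ${temperatureRise.toFixed(2)} °C`); // prints 0.24
```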
According to green search engine Ecosia, the industry standard for search engines is estimated to be about 0.2 grams of CO2 emission per search.[136] Google's 40,000 searches per second translate to 8 kg CO2 per second or over 252 million kilos of CO2 per year.[137] In 2003, The New York Times complained about Google's indexing, claiming that Google's caching of content on its site infringed its copyright for the content.[138] In both Field v. Google and Parker v. Google, the United States District Court of Nevada ruled in favor of Google.[139][140] Google flags search results with the message "This site may harm your computer" if the site is known to install malicious software in the background or otherwise surreptitiously.
For approximately 40 minutes on January 31, 2009, all search results were mistakenly classified as malware and could therefore not be clicked; instead a warning message was displayed and the user was required to enter the requested URL manually.
The bug was caused by human error.[141][142][143][144] The URL of "/" (which expands to all URLs) was mistakenly added to the malware patterns file.[142][143] In 2007, a group of researchers observed a tendency for users to rely on Google Search exclusively for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. ... In fact, one only sees a small part of what one could see if one also integrates other research tools."[145] In 2011, Internet activist Eli Pariser showed that Google Search query results are tailored to users, effectively isolating them in what he defined as a filter bubble.
Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".[146] Although contrasting views have mitigated the potential threat of "informational dystopia" and questioned the scientific nature of Pariser's claims,[147] filter bubbles have been cited to help account for the surprising results of the 2016 U.S. presidential election, alongside fake news and echo chambers, suggesting that Facebook and Google have designed personalized online realities in which "we only see and hear what we like".[148] In 2012, the US Federal Trade Commission fined Google US$22.5 million for breaching its agreement not to violate the privacy of users of Apple's Safari web browser.[149] The FTC was also continuing to investigate whether Google's favoring of its own services in its search results violated antitrust regulations.[150] Google Search engine robots are programmed to use algorithms that understand and predict human behavior.
The book Race After Technology: Abolitionist Tools for the New Jim Code[151] by Ruha Benjamin discusses human bias as a behavior that the Google Search engine can recognize.
In 2016, some users searched Google for “three Black teenagers”, and images of criminal mugshots of young African American teenagers came up.
Then, the users searched “three White teenagers” and were presented with photos of smiling, happy teenagers.
They also searched for “three Asian teenagers,” and very revealing photos of Asian girls and women appeared.
Benjamin came to the conclusion that these results reflect human prejudice and views on different ethnic groups.
A group of analysts explained the concept of a racist computer program: “The idea here is that computers, unlike people, can’t be racist but we’re increasingly learning that they do in fact take after their makers...Some experts believe that this problem might stem from the hidden biases in the massive piles of data that the algorithms process as they learn to recognize patterns...reproducing our worst values”.[151] As people talk about "googling" rather than searching, the company has taken some steps to defend its trademark, in an effort to prevent it from becoming a generic trademark.[152][153] This has led to lawsuits, threats of lawsuits, and the use of euphemisms, such as calling Google Search a famous web search engine.
In digital marketing and online advertising, Spamdexing (also known as search engine spam, search engine poisoning, black-hat search engine optimization (SEO), search spam or web spam)[1] is the deliberate manipulation of search engine indexes.
It involves a number of methods, such as link building and repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed, in a manner inconsistent with the purpose of the indexing system.[2][3] Spamdexing could be considered to be a part of search engine optimization, although there are many search engine optimization methods that improve the quality and appearance of the content of web sites and serve content useful to many users.[4] Search engines use a variety of algorithms to determine relevancy ranking.
Some of these include determining whether the search term appears in the body text or URL of a web page.
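As a rough illustration of this kind of signal (a simplified sketch, not any particular engine's actual algorithm), a scorer might count how often the term appears in the body text and add a bonus when it also appears in the URL:
```typescript
// Toy relevance score: term frequency in body text plus a bonus for a URL match.
// Purely illustrative; real ranking algorithms combine hundreds of signals.
function naiveRelevance(term: string, url: string, bodyText: string): number {
  const t = term.toLowerCase();
  const occurrences = bodyText.toLowerCase().split(t).length - 1;
  const urlBonus = url.toLowerCase().includes(t) ? 5 : 0;
  return occurrences + urlBonus;
}

console.log(naiveRelevance("widgets", "https://example.com/widgets", "Widgets and more widgets."));
// 2 body occurrences + 5 URL bonus = 7
```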
Many search engines check for instances of Spamdexing and will remove suspect pages from their indexes.
Also, search-engine operators can quickly block the results listing from entire websites that use Spamdexing, perhaps in response to user complaints of false matches.
The rise of Spamdexing in the mid-1990s made the leading search engines of the time less useful.
Using unethical methods to make websites rank higher in search engine results than they otherwise would is commonly referred to in the SEO (search engine optimization) industry as "black-hat SEO".
These methods are more focused on breaking the search-engine-promotion rules and guidelines.
In addition to this, the perpetrators run the risk of their websites being severely penalized by the Google Panda and Google Penguin search-results ranking algorithms.[5] Common Spamdexing techniques can be classified into two broad classes: content spam[4] (or term spam) and link spam.[3] The earliest known reference[2] to the term Spamdexing is by Eric Convey in his article "Porn sneaks way back on Web," The Boston Herald, May 22, 1996, where he said: The problem arises when site operators load their Web pages with hundreds of extraneous terms so search engines will list them among legitimate addresses.
The process is called "Spamdexing," a combination of spamming — the Internet term for sending users unsolicited information — and "indexing."[2] These techniques involve altering the logical view that a search engine has over the page's contents.
They all aim at variants of the vector space model for information retrieval on text collections.
Keyword stuffing involves the calculated placement of keywords within a page to raise the keyword count, variety, and density of the page.
This makes a page appear more relevant to a web crawler, increasing the likelihood that it will be found.
Example: A promoter of a Ponzi scheme wants to attract web surfers to a site where he advertises his scam.
He places hidden text appropriate for a fan page of a popular music group on his page, hoping that the page will be listed as a fan site and receive many visits from music lovers.
Older versions of indexing programs simply counted how often a keyword appeared, and used that to determine relevance levels.
Most modern search engines have the ability to analyze a page for keyword stuffing and determine whether the frequency is consistent with other sites created specifically to attract search engine traffic.
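A crude version of such a frequency check can be sketched as follows (a simplified illustration with an arbitrary threshold, not any engine's real detector): compute each word's share of the page and flag pages where a single keyword dominates.
```typescript
// Flag pages where one keyword makes up an implausibly large share of all words.
// Simplified sketch; real detectors compare against statistics gathered from many sites.
function keywordDensity(text: string): Map<string, number> {
  const words = text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
  const counts = new Map<string, number>();
  for (const w of words) counts.set(w, (counts.get(w) ?? 0) + 1);
  const densities = new Map<string, number>();
  for (const [w, c] of counts) densities.set(w, c / words.length);
  return densities;
}

function looksStuffed(text: string, threshold = 0.1): boolean {
  // A single word accounting for more than ~10% of a long page is suspicious.
  return Array.from(keywordDensity(text).values()).some(d => d > threshold);
}

console.log(looksStuffed("cheap shoes cheap shoes cheap shoes buy cheap shoes now")); // true
```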
Also, large webpages are truncated, so that massive dictionary lists cannot be indexed on a single webpage.[citation needed] (However, spammers can circumvent this webpage-size limitation merely by setting up multiple webpages, either independently or linked to each other.) Unrelated hidden text is disguised by making it the same color as the background, using a tiny font size, or hiding it within HTML code such as "no frame" sections, alt attributes, zero-sized DIVs, and "no script" sections.
People manually screening red-flagged websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some of its pages.
However, hidden text is not always Spamdexing: it can also be used to enhance accessibility.
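A very rough automated check for some of these tricks might scan the raw HTML for styles that collapse or hide text (a sketch with illustrative patterns only; matching text colour against the background requires rendering the page and comparing computed styles, which simple string matching cannot do):
```typescript
// Naive scan for common hidden-text patterns in raw HTML.
// Real systems render the page and inspect computed styles; this is only a sketch,
// and some matches (e.g. display:none) have perfectly legitimate uses.
const hiddenTextPatterns: RegExp[] = [
  /font-size\s*:\s*0/i,             // zero-sized font
  /display\s*:\s*none/i,            // element not rendered at all
  /visibility\s*:\s*hidden/i,       // element hidden but still in the layout
  /(width|height)\s*:\s*0(px)?\b/i, // zero-sized container, e.g. a zero-sized DIV
];

function mayContainHiddenText(html: string): boolean {
  return hiddenTextPatterns.some(p => p.test(html));
}

console.log(mayContainHiddenText('<div style="font-size:0">cheap pills cheap pills</div>')); // true
```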
Meta-tag stuffing involves repeating keywords in the meta tags and using meta keywords that are unrelated to the site's content.
This tactic has been ineffective since 2005.[citation needed] "Gateway" or doorway pages are low-quality web pages created with very little content; instead, they are stuffed with very similar keywords and phrases.
They are designed to rank highly within the search results, but serve no purpose to visitors looking for information.
A doorway page will generally have "click here to enter" on the page; autoforwarding can also be used for this purpose.
In 2006, Google removed the German site of vehicle manufacturer BMW, BMW.de, from its index for using "doorway pages".[6] Scraper sites are created using various programs designed to "scrape" search-engine results pages or other sources of content and create "content" for a website.[citation needed] The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission.
Such websites are generally full of advertising (such as pay-per-click ads), or they redirect the user to other sites.
It is even feasible for scraper sites to outrank original websites for their own information and organization names.
Article spinning involves rewriting existing articles, as opposed to merely scraping content from other sites, to avoid penalties imposed by search engines for duplicate content.
This process is undertaken by hired writers or automated using a thesaurus database or a neural network.
Similarly to article spinning, some sites use machine translation to render their content in several languages, with no human editing, resulting in unintelligible texts that nonetheless continue to be indexed by search engines, thereby attracting traffic.
Publishing web pages that contain information that is unrelated to the title is a misleading practice known as deception.
Despite being a target for penalties from the leading search engines that rank pages, deception is a common practice in some types of sites, including dictionary and encyclopedia sites.
Link spam is defined as links between pages that are present for reasons other than merit.[7] Link spam takes advantage of link-based ranking algorithms, which give a website a higher ranking the more other highly ranked websites link to it.
These techniques also aim at influencing other link-based ranking techniques such as the HITS algorithm.[citation needed] Link farms are tightly-knit networks of websites that link to each other for the sole purpose of gaming the search engine ranking algorithms.
These are also known facetiously as mutual admiration societies.[8] Use of link farms was greatly reduced after Google launched the first Panda update in February 2011, which introduced significant improvements to its spam-detection algorithm.
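A minimal PageRank-style calculation illustrates why this works: every page in a tightly interlinked cluster passes score to the others, so the cluster's pages end up with inflated scores relative to their merit. The sketch below is a simplified version of the general idea, not Google's actual algorithm.
```typescript
type Graph = Record<string, string[]>; // page -> pages it links to

// Three link-farm pages link to one another and to a target page;
// one honest page links to the target independently.
const graph: Graph = {
  farm1: ["farm2", "farm3", "target"],
  farm2: ["farm1", "farm3", "target"],
  farm3: ["farm1", "farm2", "target"],
  target: [],
  honest: ["target"],
};

function pageRank(g: Graph, damping = 0.85, iterations = 50): Record<string, number> {
  const pages = Object.keys(g);
  let rank: Record<string, number> = {};
  for (const p of pages) rank[p] = 1 / pages.length;

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / pages.length;
    for (const p of pages) {
      const out = g[p].length > 0 ? g[p] : pages; // dangling pages spread their rank evenly
      for (const q of out) next[q] += (damping * rank[p]) / out.length;
    }
    rank = next;
  }
  return rank;
}

console.log(pageRank(graph)); // the farm pages and their target accumulate most of the score
```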
Private blog networks (PBNs) are groups of authoritative websites used as a source of contextual links that point to the owner's main website to achieve higher search engine rankings.
Owners of PBN websites use expired domains or auction domains that have backlinks from high-authority websites.
Google targeted and penalized PBN users on several occasions with several massive deindexing campaigns since 2014.[9] Hidden links are hyperlinks placed where visitors will not see them in order to increase link popularity; because anchor text can help a page rank higher for the matching phrase, such links are concealed in the page's markup.
A Sybil attack is the forging of multiple identities for malicious intent, named after the famous multiple personality disorder patient "Sybil".
A spammer may create multiple web sites at different domain names that all link to each other, such as fake blogs (known as spam blogs).
Spam blogs are blogs created solely for commercial promotion and the passage of link authority to target sites.
Often these "splogs" are designed in a misleading manner that will give the effect of a legitimate website but upon close inspection will often be written using spinning software or be very poorly written and barely readable content.
They are similar in nature to link farms.
Guest blog spam is the process of placing guest blogs on websites for the sole purpose of gaining a link to another website or websites.
Unfortunately, these are often confused with legitimate forms of guest blogging with other motives than placing links.
This technique was made famous by Matt Cutts, who publicly declared "war" against this form of link spam.[10] Some link spammers utilize expired domain crawler software or monitor DNS records for domains that will expire soon, then buy them when they expire and replace the pages with links to their pages.
However, it is possible but not confirmed that Google resets the link data on expired domains.[citation needed] To maintain all previous Google ranking data for the domain, it is advisable that a buyer grab the domain before it is "dropped".
Some of these techniques may be applied for creating a Google bomb — that is, to cooperate with other users to boost the ranking of a particular page for a particular query.
Cookie stuffing involves placing an affiliate tracking cookie on a website visitor's computer without their knowledge, which will then generate revenue for the person doing the cookie stuffing.
This not only generates fraudulent affiliate sales, but also has the potential to overwrite other affiliates' cookies, essentially stealing their legitimately earned commissions.
Web sites that can be edited by users can be used by spamdexers to insert links to spam sites if the appropriate anti-spam measures are not taken.
Automated spambots can rapidly make the user-editable portion of a site unusable.
Programmers have developed a variety of automated spam prevention techniques to block or at least slow down spambots.
Spam in blogs is the placing or solicitation of links randomly on other sites, placing a desired keyword into the hyperlinked text of the inbound link.
Guest books, forums, blogs, and any site that accepts visitors' comments are particular targets and are often victims of drive-by spamming where automated software creates nonsense posts with links that are usually irrelevant and unwanted.
Comment spam is a form of link spam that has arisen in web pages that allow dynamic user editing such as wikis, blogs, and guestbooks.
It can be problematic because agents can be written that automatically randomly select a user edited web page, such as a Wikipedia article, and add spamming links.[11] Wiki spam is a form of link spam on wiki pages.
The spammer uses the open editability of wiki systems to place links from the wiki site to the spam site.
The subject of the spam site is often unrelated to the wiki page where the link is added.
Referrer spam takes place when a spam perpetrator or facilitator accesses a web page (the referee), by following a link from another web page (the referrer), so that the referee is given the address of the referrer by the person's Internet browser.
Some websites have a referrer log which shows which pages link to that site.
By having a robot randomly access many sites enough times, with a message or specific address given as the referrer, that message or Internet address then appears in the referrer log of those sites that have referrer logs.
Since some Web search engines base the importance of sites on the number of different sites linking to them, referrer-log spam may increase the search engine rankings of the spammer's sites.
Also, site administrators who notice the referrer log entries in their logs may follow the link back to the spammer's referrer page.
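Administrators typically spot the pattern by aggregating their referrer logs; the sketch below (with made-up log lines in a simplified "path referrer" format) counts hits per referring domain so that implausibly frequent referrers stand out.
```typescript
// Count hits per referring domain from simple "requested-path referrer-url" log lines.
// The log format and the example entries are made up for illustration.
const logLines = [
  "/index.html https://buy-cheap-pills.example/",
  "/about.html https://buy-cheap-pills.example/",
  "/index.html https://legit-blog.example/post/42",
  "/index.html https://buy-cheap-pills.example/",
];

function referrerCounts(lines: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of lines) {
    const referrer = line.split(" ")[1];
    if (!referrer) continue;
    const domain = new URL(referrer).hostname;
    counts.set(domain, (counts.get(domain) ?? 0) + 1);
  }
  return counts;
}

// Referrers that hit many pages but send no real visitors are candidates for a blocklist.
console.log(referrerCounts(logLines));
// Map { 'buy-cheap-pills.example' => 3, 'legit-blog.example' => 1 }
```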
Because of the large amount of spam posted to user-editable webpages, Google proposed a nofollow tag that could be embedded with links.
A link-based search engine, such as Google's PageRank system, will not use the link to increase the score of the linked website if the link carries a nofollow tag.
This ensures that spamming links on user-editable websites will not raise the target sites' rankings with search engines.
Nofollow is used by several major websites, including WordPress, Blogger, and Wikipedia.[citation needed]
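A crawler that honours the tag simply skips links whose rel attribute contains nofollow when building its link graph. The following regex-based sketch is for illustration only; a production crawler would use a proper HTML parser.
```typescript
// Extract outbound links from HTML, ignoring those marked rel="nofollow",
// so that they contribute nothing to a link-based ranking score.
function followableLinks(html: string): string[] {
  const links: string[] = [];
  const anchorPattern = /<a\b([^>]*)>/gi;
  let match: RegExpExecArray | null;
  while ((match = anchorPattern.exec(html)) !== null) {
    const attrs = match[1];
    if (/rel\s*=\s*["'][^"']*nofollow[^"']*["']/i.test(attrs)) continue; // skip nofollowed links
    const href = /href\s*=\s*["']([^"']+)["']/i.exec(attrs);
    if (href) links.push(href[1]);
  }
  return links;
}

const sampleHtml = `
  <a href="https://example.com/">followed</a>
  <a href="https://spam.example/" rel="nofollow">ignored</a>
`;
console.log(followableLinks(sampleHtml)); // [ 'https://example.com/' ]
```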
A mirror site is the hosting of multiple websites with conceptually similar content but using different URLs.
Some search engines give a higher rank to results where the keyword searched for appears in the URL.
URL redirection is the taking of the user to another page without his or her intervention, e.g., using META refresh tags, Flash, JavaScript, Java, or server-side redirects.
However, a 301 redirect, or permanent redirect, is not considered malicious behavior.
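The difference is visible in how the redirect is expressed: a permanent redirect is declared in the HTTP status code and Location header, whereas a deceptive redirect is often buried in the returned HTML as a meta refresh or a script. The sketch below (illustrative patterns and a hypothetical target URL) classifies a fetched page accordingly.
```typescript
// Classify how a page redirects: an honest HTTP redirect versus an in-page
// meta-refresh or script-based redirect. The patterns here are illustrative only.
async function classifyRedirect(url: string): Promise<string> {
  const response = await fetch(url, { redirect: "manual" }); // do not follow automatically
  if (response.status === 301 || response.status === 308) return "permanent HTTP redirect";
  if (response.status === 302 || response.status === 307) return "temporary HTTP redirect";

  const body = await response.text();
  if (/<meta[^>]+http-equiv\s*=\s*["']refresh["']/i.test(body)) return "meta-refresh redirect";
  if (/(window\.)?location(\.href)?\s*=/.test(body)) return "script-based redirect";
  return "no redirect detected";
}

// classifyRedirect("https://example.com/").then(console.log); // hypothetical target URL
```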
Cloaking refers to any of several means to serve a page to the search-engine spider that is different from that seen by human users.
It can be an attempt to mislead search engines regarding the content on a particular web site.
Cloaking, however, can also be used to ethically increase accessibility of a site to users with disabilities or provide human users with content that search engines aren't able to process or parse.
It is also used to deliver content based on a user's location; Google itself uses IP delivery, a form of cloaking, to deliver results.
Another form of cloaking is code swapping, i.e., optimizing a page for top ranking and then swapping another page in its place once a top ranking is achieved.
Google refers to this type of redirect as a "sneaky redirect".[12]
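One rough way to detect cloaking is to request the same URL twice, once with a search-engine crawler's User-Agent string and once with an ordinary browser's, and compare the responses. The sketch below uses response size as a crude similarity signal and would need refinement to tolerate legitimate differences such as localisation or advertising.
```typescript
// Fetch the same URL as "a crawler" and as "a browser" and compare the responses.
// A large difference can indicate cloaking; small differences are normal, so this
// is only a first-pass heuristic, not a definitive test.
async function fetchAs(url: string, userAgent: string): Promise<string> {
  const response = await fetch(url, { headers: { "User-Agent": userAgent } });
  return response.text();
}

async function possiblyCloaked(url: string): Promise<boolean> {
  const crawlerView = await fetchAs(url, "Googlebot/2.1 (+http://www.google.com/bot.html)");
  const browserView = await fetchAs(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
  const shorter = Math.min(crawlerView.length, browserView.length);
  const longer = Math.max(crawlerView.length, browserView.length, 1);
  // Crude similarity signal: wildly different response sizes are suspicious.
  return shorter / longer < 0.5;
}

// possiblyCloaked("https://example.com/").then(console.log); // hypothetical URL
```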
Spamdexed pages are sometimes eliminated from search results by the search engine.
Users can also refine their own queries: prefixing a keyword with "-" (minus) excludes pages that contain that keyword in their content or in the domain of their URL from the results.
For example, the query "-naver" excludes pages that contain the word "naver" and pages whose URL domain contains "naver".
Google itself launched the Google Chrome extension "Personal Blocklist (by Google)" in 2011 as part of countermeasures against content farming.[13][14] As of 2018, the extension only works with the PC version of Google Chrome.
Progressive enhancement is a strategy for web design that emphasizes core webpage content first.
This strategy then progressively adds more nuanced and technically rigorous layers of presentation and features on top of the content as the end-user's browser/internet connection allow.
The proposed benefits of this strategy are that it allows everyone to access the basic content and functionality of a web page, using any browser or Internet connection, while also providing an enhanced version of the page to those with more advanced browser software or greater bandwidth.
"Progressive enhancement" was coined by Steven Champeon & Nick Finck at the SXSW Interactive conference on March 11, 2003 in Austin,[1] and through a series of articles for Webmonkey which were published between March and June 2003.[2] Specific Cascading Style Sheets (CSS) techniques pertaining to flexibility of the page layout accommodating different screen resolutions is the concept associated with responsive web design approach.
.net Magazine chose Progressive enhancement as #1 on its list of Top Web Design Trends for 2012 (responsive design was #2).[3] Google has encouraged the adoption of Progressive enhancement to help "our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported".[4] The strategy is an evolution of a previous web design strategy known as graceful degradation, wherein designers would create Web pages for the latest browsers that would also work well in older versions of browser software.
Graceful degradation was supposed to allow the page to "degrade", or remain presentable even if certain technologies assumed by the design were not present, without being jarring to the user of such older software.
In Progressive enhancement (PE) the strategy is deliberately reversed: a basic markup document is created, geared towards the lowest common denominator of browser software functionality, and then the designer adds in functionality or enhancements to the presentation and behavior of the page, using modern technologies such as Cascading Style Sheets, Scalable Vector Graphics (SVG), or JavaScript.
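As a minimal sketch of the approach (with hypothetical element ids and field names), the enhancement below ships as an externally linked script, checks that the browser supports what it needs, and otherwise leaves the baseline markup and form submission untouched.
```typescript
// Externally linked enhancement script (TypeScript compiled to JavaScript).
// The page works without it: a plain <form> still posts to the server and
// returns a full results page. When the browser supports the required APIs,
// the script upgrades the form to load results without a full page reload.
function enhanceSearchForm(): void {
  const form = document.querySelector<HTMLFormElement>("#search-form"); // hypothetical id
  const results = document.querySelector("#results");                   // hypothetical id
  if (!form || !results || typeof fetch !== "function") return;         // keep baseline behaviour

  form.addEventListener("submit", async (event) => {
    event.preventDefault();
    const query = new FormData(form).get("q"); // hypothetical field name
    if (typeof query !== "string") return;
    const response = await fetch(`${form.action}?q=${encodeURIComponent(query)}`);
    results.innerHTML = await response.text();
  });
}

document.addEventListener("DOMContentLoaded", enhanceSearchForm);
```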
All such enhancements are externally linked, preventing data unusable by certain browsers from being unnecessarily downloaded.[citation needed] The Progressive enhancement approach is derived from Champeon's early experience (c. 1993-4) with Standard Generalized Markup Language (SGML), before working with HTML or any Web presentation languages, as well as from later experiences working with CSS to work around browser bugs.
In those early SGML contexts, semantic markup was of key importance, whereas presentation was nearly always considered separately, rather than being embedded in the markup itself.
This concept is variously referred to in markup circles as the rule of separation of presentation and content, separation of content and style, or of separation of semantics and presentation.
As the Web evolved in the mid-nineties, but before CSS was introduced and widely supported, this cardinal rule of SGML was repeatedly violated by HTML's extenders.
As a result, web designers were forced to adopt new, disruptive technologies and tags in order to remain relevant.[citation needed] With a nod to graceful degradation, in recognition that not everyone had the latest browser, many began to simply adopt design practices and technologies only supported in the most recent and perhaps the single previous major browser releases.
For several years, much of the Web simply did not work in anything but the most recent, most popular browsers.[citation needed] This remained true until the rise and widespread adoption of and support for CSS, as well as many populist, grassroots educational efforts (from Eric Costello, Owen Briggs, Dave Shea, and others) showing Web designers how to use CSS for layout purposes.
Progressive enhancement is based on a recognition that the core assumption behind "graceful degradation" — that browsers always got faster and more powerful — was proving itself false with the rise of handheld and PDA devices with low-functionality browsers and serious bandwidth constraints.
In addition, the rapid evolution of HTML and related technologies in the early days of the Web has slowed, and very old browsers have become obsolete, freeing designers to use powerful technologies such as CSS to manage all presentation tasks and JavaScript to enhance complex client-side behavior.
First proposed as a somewhat less unwieldy catchall phrase to describe the delicate art of "separating document structure and contents from semantics, presentation, and behavior", and based on the then-common use of CSS hacks to work around rendering bugs in specific browsers, the Progressive enhancement strategy has taken on a life of its own as new designers have embraced the idea and extended and revised the approach.[how?] The Progressive enhancement strategy consists of the following core principles: basic content should be accessible to all web browsers; basic functionality should be accessible to all web browsers; sparse, semantic markup contains all content; enhanced layout is provided by externally linked CSS; enhanced behavior is provided by unobtrusive, externally linked JavaScript; and end-user web browser preferences are respected. Web pages created according to the principles of Progressive enhancement are by their nature more accessible, because the strategy demands that basic content always be available, not obstructed by commonly unsupported or easily disabled scripting.
Additionally, the sparse markup principle makes it easier for tools that read content aloud to find that content.
It is unclear how well Progressive enhancement sites work with older tools designed to deal with table layouts, "tag soup", and the like.[citation needed] Improved results with respect to search engine optimization (SEO) are another side effect of a Progressive enhancement-based Web design strategy.
Because the basic content is always accessible to search engine spiders, pages built with Progressive enhancement methods avoid problems that may hinder search engine indexing.[14] Some skeptics, such as Garret Dimon, have expressed their concern that Progressive enhancement is not workable in situations that rely heavily on JavaScript to achieve certain user interface presentations or behaviors,[15] to which unobtrusive JavaScript is one response.
Others have countered with the point that informational pages should be coded using Progressive enhancement in order to be indexed by spiders,[16] and that even Flash-heavy pages should be coded using Progressive enhancement.[17] In a related area, many have expressed their doubts concerning the principle of the separation of content and presentation in absolute terms, pushing instead for a realistic recognition that the two are inextricably linked.[18][19]