SEO Web Design

Local SEO Somerset | +44 7976 625722

49B Bridle Way, Barwick, Yeovil, BA22 9TN

https://sites.google.com/site/localseoservicesgold/

http://www.localseoservicescompany.co.uk/

Search engine optimization

Search engine optimization (SEO) is the process of growing the quality and quantity of website traffic by increasing the visibility of a website or a web page to users of a web search engine.[1] SEO refers to the improvement of unpaid results (known as "natural" or "organic" results) and excludes direct traffic and the purchase of paid placement.

Additionally, it may target different kinds of searches, including image search, video search, academic search,[2] news search, and industry-specific vertical search engines.

Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

By May 2015, mobile search had surpassed desktop search.[3] As an Internet marketing strategy, SEO considers how search engines work, the computer-programmed algorithms that dictate search engine behavior, what people search for, the actual search terms or keywords typed into search engines, and which search engines are preferred by their targeted audience.

SEO is performed because a website will receive more visitors from a search engine when it ranks higher on the search engine results page (SERP).

These visitors can then be converted into customers.[4] SEO differs from local search engine optimization in that the latter is focused on optimizing a business' online presence so that its web pages will be displayed by search engines when a user enters a local search for its products or services.

The former, by contrast, is more focused on national or international searches.

Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web.

Initially, all webmasters only needed to submit the address of a page, or URL, to the various engines which would send a web crawler to crawl that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server.

A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains.

All of this information is then placed into a scheduler for crawling at a later date.
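
As background, the spider/indexer/scheduler split described above can be sketched in a few lines of Python. This is only a toy illustration using the standard library; the seed URL, the page limit, and the lack of error handling are simplifications, not a description of any real search engine's pipeline.

```python
# Toy sketch of the crawl-and-index loop described above (not a real engine).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageIndexer(HTMLParser):
    """Collects outgoing links and visible words from one downloaded page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

    def handle_data(self, data):
        self.words.extend(data.split())


def crawl(seed_url, max_pages=10):
    schedule = deque([seed_url])   # the "scheduler": pages queued for a later crawl
    index = {}                     # URL -> words found on that page
    while schedule and len(index) < max_pages:
        url = schedule.popleft()
        if url in index:
            continue
        html = urlopen(url).read().decode("utf-8", errors="replace")  # the "spider"
        indexer = PageIndexer(url)                                    # the "indexer"
        indexer.feed(html)
        index[url] = indexer.words
        schedule.extend(indexer.links)
    return index

# Example with a placeholder seed: index = crawl("https://example.com/")
```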

Website owners recognized the value of a high ranking and visibility in search engine results,[6] creating an opportunity for both white hat and black hat SEO practitioners.

According to industry analyst Danny Sullivan, the phrase "Search engine optimization" probably came into use in 1997.

Sullivan credits Bruce Clay as one of the first people to popularize the term.[7] On May 2, 2007,[8] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[9] that SEO is a "process" involving manipulation of keywords and not a "marketing service." Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB.

Meta tags provide a guide to each page's content.

Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content.

Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords.

Early search engines, such as AltaVista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12] By relying so much on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation.

To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters.

This meant moving away from heavy reliance on term density to a more holistic process for scoring semantic signals.[13] Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources.

Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with Search engine optimization and related topics.[14] Companies that employ overly aggressive techniques can get their client websites banned from the search results.

In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[15] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[16] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[17] Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, webchats, and seminars.

Major search engines provide information and guidelines to help with website optimization.[18][19] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[20] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and to track the web pages' index status.

In 2015, it was reported that Google was developing and promoting mobile search as a key feature within future products.

In response, many brands began to take a different approach to their Internet marketing strategies.[21] In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages.

The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[22] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another.

In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
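
The random-surfer idea has a compact mathematical form. In the commonly quoted version of the PageRank formula (shown here as general background rather than as Google's current algorithm), the rank of a page A depends on the ranks of the pages T_1, ..., T_n linking to it, each divided by their count of outgoing links C(T_i), with a damping factor d usually given as 0.85:

```latex
PR(A) = (1 - d) + d \sum_{i=1}^{n} \frac{PR(T_i)}{C(T_i)}
```

(A normalized variant divides the (1 - d) term by the total number of pages N.) A page linked from a few high-PageRank pages can therefore outrank a page linked from many low-PageRank ones, which is the sense in which some links are "stronger" than others.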

Page and Brin founded Google in 1998.[23] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[24] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings.

Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank.

Many sites focused on exchanging, buying, and selling links, often on a massive scale.

Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[25] By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation.

In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals.[26] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages.

Some SEO practitioners have studied different approaches to Search engine optimization, and have shared their personal opinions.[27] Patents related to search engines can provide information to better understand search engines.[28] In 2005, Google began personalizing search results for each user.

Depending on their history of previous searches, Google crafted results for logged in users.[29] In 2007, Google announced a campaign against paid links that transfer PageRank.[30] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links.

Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollow links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[31] As a result of this change, the use of nofollow led to the evaporation of PageRank.

In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting.

Additionally several solutions have been suggested that include the usage of iframes, Flash and JavaScript.[32] In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[33] On June 8, 2010 a new web indexing system called Google Caffeine was announced.

Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index, intended to make content show up more quickly in search results.

According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[34] Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant.

Historically, site administrators have spent months or even years optimizing a website to increase search rankings.

With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[35] In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources.

Historically, websites have copied content from one another and benefited in search engine rankings by engaging in this practice.

However, Google implemented a new system which punishes sites whose content is not unique.[36] The 2012 Google Penguin update attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.[37] Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it really focuses on spammy links[38] by gauging the quality of the sites the links are coming from.

The 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.

Hummingbird's language processing system falls under the newly recognized term of "conversational search", where the system pays more attention to each word in the query in order to better match pages to the meaning of the query rather than to a few words.[39] For content publishers and writers, Hummingbird is intended to resolve issues by getting rid of irrelevant content and spam, allowing Google to produce high-quality content and rely on them as 'trusted' authors.

In October 2019, Google announced they would start applying BERT models for English-language search queries in the US.

Bidirectional Encoder Representations from Transformers (BERT) was another attempt by Google to improve their natural language processing, this time in order to better understand the search queries of their users.[40] In terms of search engine optimization, BERT was intended to connect users more easily to relevant content and increase the quality of traffic coming to websites that rank in the search engine results page.[41] The leading search engines, such as Google, Bing, and Yahoo!, use crawlers to find pages for their algorithmic search results.

Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically.

The Yahoo! Directory and DMOZ, two major directories which closed in 2014 and 2017 respectively, both required manual submission and human editorial review.[42] Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links[43] in addition to their URL submission console.[44] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[45] however, this practice was discontinued in 2009.
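
For sites whose pages are hard to discover by following links, the sitemap itself is a small XML file following the sitemaps.org protocol. The sketch below generates one with Python's standard library; the URLs are placeholders, and real sitemaps may also carry optional elements such as lastmod.

```python
# Minimal sketch: write a sitemap.xml listing a few placeholder URLs,
# following the sitemaps.org protocol.
import xml.etree.ElementTree as ET

pages = [  # hypothetical pages that crawlers might not find by following links
    "https://example.com/",
    "https://example.com/services/",
    "https://example.com/archive/2009/old-page",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```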

Search engine crawlers may look at a number of different factors when crawling a site.

Not every page is indexed by the search engines.

The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[46] Today, most people are searching on Google using a mobile device.[47] In November 2016, Google announced a major change to the way it crawls websites and started to make its index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in its index.[48] In May 2019, Google updated the rendering engine of their crawler to be the latest version of Chromium (74 at the time of the announcement).

Google indicated that they would regularly update the Chromium rendering engine to the latest version.[49]

In December 2019, Google began updating the User-Agent string of their crawler to reflect the latest Chrome version used by their rendering service.

The delay was to allow webmasters time to update their code that responded to particular bot User-Agent strings.

Google ran evaluations and felt confident the impact would be minor.[50]

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain.

Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex"> ).

When a search engine visits a site, the robots.txt located in the root directory is the first file crawled.

The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled.

As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled.

Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches.
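
Python's standard library ships a parser for robots.txt, which makes it easy to illustrate how a polite crawler consults the file before fetching a page. The domain, paths, and rules below are purely illustrative.

```python
# Sketch: how a well-behaved crawler might honour robots.txt before fetching.
# Assume https://example.com/robots.txt contains something like:
#   User-agent: *
#   Disallow: /cart/
#   Disallow: /search
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # download and parse the file from the site root

for path in ("/products/widget", "/cart/checkout", "/search?q=widgets"):
    url = "https://example.com" + path
    if robots.can_fetch("MyCrawler", url):
        print("allowed:", url)
    else:
        print("blocked:", url)  # e.g. carts and internal search results
```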

In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[51] A variety of methods can increase the prominence of a webpage within the search results.

Cross linking between pages of the same website to provide more links to important pages may improve its visibility.[52] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[52] Updating content so as to keep search engines crawling back frequently can give additional weight to a site.

Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic.

URL canonicalization of web pages accessible via multiple URLs, using the canonical link element[53] or via 301 redirects can help make sure links to different versions of the URL all count towards the page's link popularity score.
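
As a rough sketch of the redirect half of this advice, the toy HTTP handler below answers requests for non-canonical URL variants (trailing slashes, query strings) with a 301 redirect to a single canonical form, and emits a rel="canonical" link element on the canonical page itself. The host name and normalization rules are assumptions for illustration; production sites usually configure this in the web server rather than in application code.

```python
# Sketch: consolidate URL variants onto one canonical URL via 301 redirects.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL_HOST = "https://www.example.com"  # placeholder canonical host


class CanonicalRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Drop query strings and trailing slashes to form one canonical path.
        path = self.path.split("?")[0].rstrip("/") or "/"
        canonical = CANONICAL_HOST + path
        if self.path != path:                # a variant URL: redirect permanently
            self.send_response(301)
            self.send_header("Location", canonical)
            self.end_headers()
        else:                                # already canonical: serve the page
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            body = f'<link rel="canonical" href="{canonical}">'
            self.wfile.write(body.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CanonicalRedirectHandler).serve_forever()
```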

SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat").

The search engines attempt to minimize the effect of the latter, among them spamdexing.

Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[54] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[55] An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception.

As the search engine guidelines[18][19][56] are not written as a series of rules or commandments, this is an important distinction to note.

White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.

White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online "spider" algorithms, rather than attempting to trick the algorithm from its intended purpose.

White hat SEO is in many ways similar to web development that promotes accessibility,[57] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception.

One black hat technique uses hidden text, either as text colored similarly to the background, placed in an invisible div, or positioned off-screen.

Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Another category sometimes used is grey hat SEO.

This is in between the black hat and white hat approaches, where the methods employed avoid the site being penalized but do not act to produce the best content for users.

Grey hat SEO is entirely focused on improving search engine rankings.

Search engines may penalize sites they discover using black or grey hat methods, either by reducing their rankings or eliminating their listings from their databases altogether.

Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review.

One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[58] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's search engine results page.[59] SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay per click (PPC) campaigns, depending on the site operator's goals.

Search engine marketing (SEM) is the practice of designing, running and optimizing search engine ad campaigns.[60] Its difference from SEO is most simply depicted as the difference between paid and unpaid priority ranking in search results.

Its purpose regards prominence more than relevance; website developers should regard SEM with the utmost importance with consideration to visibility as most navigate to the primary listings of their search.[61] A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[62] In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public,[63] which revealed a shift in their focus towards "usefulness" and mobile search.

In recent years the mobile market has exploded, overtaking the use of desktops, as shown by StatCounter in October 2016, when they analyzed 2.5 million websites and found that 51.3% of the pages were loaded by a mobile device.[64] Google has been one of the companies taking advantage of the popularity of mobile usage by encouraging websites to use their Google Search Console and the Mobile-Friendly Test, which allows companies to measure their website against the search engine results and see how user-friendly it is.

SEO may generate an adequate return on investment.

However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals.

Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[65] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic.

According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[66] It is considered a wise business practice for website operators to liberate themselves from dependence on search engine traffic.[67] In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.

Optimization techniques are highly tuned to the dominant search engines in the target market.

The search engines' market shares vary from market to market, as does competition.

In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[68] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[69] As of 2006, Google had an 85–90% market share in Germany.[70] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[70] As of June 2008, the market share of Google in the UK was close to 90% according to Hitwise.[71] That market share is achieved in a number of countries.

As of 2009, there are only a few large markets where Google is not the leading search engine.

In most cases, when Google is not leading in a given market, it is lagging behind a local player.

The most notable example markets are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address.

Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[70] On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google.

SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations.

On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[72][73] In March 2006, KinderStart filed a lawsuit against Google over search engine rankings.

KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%.

On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[74][75]

Web design

Web design encompasses many different skills and disciplines in the production and maintenance of websites.

The different areas of Web design include web graphic design; interface design; authoring, including standardised code and proprietary software; user experience design; and search engine optimization.

Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[1] The term "Web design" is normally used to describe the design process relating to the front-end (client side) design of a website including writing markup.

Web design partially overlaps web engineering in the broader scope of web development.

Web designers are expected to have an awareness of usability and if their role involves creating markup then they are also expected to be up to date with web accessibility guidelines.

Although Web design has a fairly recent history, it can be linked to other areas such as graphic design, user experience, and multimedia arts, but is more aptly seen from a technological standpoint.

It has become a large part of people's everyday lives.

It is hard to imagine the Internet without animated graphics, different styles of typography, background, and music.

In 1989, whilst working at CERN, Tim Berners-Lee proposed to create a global hypertext project, which later became known as the World Wide Web.

From 1991 to 1993 the World Wide Web was born.

Text-only pages could be viewed using a simple line-mode browser.[2] In 1993 Marc Andreessen and Eric Bina created the Mosaic browser.

At the time there were multiple browsers, however the majority of them were Unix-based and naturally text heavy.

There had been no integrated approach to graphic design elements such as images or sounds.

The Mosaic browser broke this mould.[3] The W3C was created in October 1994 to "lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability."[4] This discouraged any one company from monopolizing a proprietary browser and programming language, which could have altered the effect of the World Wide Web as a whole.

The W3C continues to set standards, which can today be seen with JavaScript and other languages.

In 1994 Andreessen formed Mosaic Communications Corp., which later became known as Netscape Communications and released the Netscape 0.9 browser.

Netscape created its own HTML tags without regard to the traditional standards process.

For example, Netscape 1.1 included tags for changing background colours and formatting text with tables on web pages.

Throughout 1996 to 1999 the browser wars began, as Microsoft and Netscape fought for ultimate browser dominance.

During this time there were many new technologies in the field, notably Cascading Style Sheets, JavaScript, and Dynamic HTML.

On the whole, the browser competition did lead to many positive creations and helped Web design evolve at a rapid pace.[5] In 1996, Microsoft released its first competitive browser, which was complete with its own features and HTML tags.

It was also the first browser to support style sheets, which at the time was seen as an obscure authoring technique and is today an important aspect of Web design.[5] The HTML markup for tables was originally intended for displaying tabular data.

However designers quickly realized the potential of using HTML tables for creating the complex, multi-column layouts that were otherwise not possible.

At this time, design and good aesthetics seemed to take precedence over good mark-up structure, and little attention was paid to semantics and web accessibility.

HTML sites were limited in their design options, even more so with earlier versions of HTML.

To create complex designs, many Web designers had to use complicated table structures or even use blank spacer .GIF images to stop empty table cells from collapsing.[6] CSS was introduced in December 1996 by the W3C to support presentation and layout.

This allowed HTML code to be semantic rather than both semantic and presentational, and improved web accessibility, see tableless Web design.

In 1996, Flash (originally known as FutureSplash) was developed.

At the time, the Flash content development tool was relatively simple compared to now, using basic layout and drawing tools, a limited precursor to ActionScript, and a timeline, but it enabled Web designers to go beyond the point of HTML, animated GIFs and JavaScript.

However, because Flash required a plug-in, many web developers avoided using it for fear of limiting their market share due to lack of compatibility.

Instead, designers reverted to gif animations (if they didn't forego using motion graphics altogether) and JavaScript for widgets.

But the benefits of Flash made it popular enough among specific target markets to eventually work its way to the vast majority of browsers, and powerful enough to be used to develop entire sites.[6] During 1998 Netscape released Netscape Communicator code under an open source licence, enabling thousands of developers to participate in improving the software.

However, they decided to start from the beginning, which guided the development of the open source browser and soon expanded to a complete application platform.[5] The Web Standards Project was formed and promoted browser compliance with HTML and CSS standards by creating Acid1, Acid2, and Acid3 tests.

2000 was a big year for Microsoft.

Internet Explorer was released for Mac; this was significant as it was the first browser that fully supported HTML 4.01 and CSS 1, raising the bar in terms of standards compliance.

It was also the first browser to fully support the PNG image format.[5] During this time Netscape was sold to AOL and this was seen as Netscape's official loss to Microsoft in the browser wars.[5] Since the start of the 21st century the web has become more and more integrated into people's lives.

As this has happened the technology of the web has also moved on.

There have also been significant changes in the way people use and access the web, and this has changed how sites are designed.

Since the end of the browser wars, new browsers have been released.

Many of these are open source, meaning that they tend to have faster development and are more supportive of new standards.

The new options are considered by many to be better than Microsoft's Internet Explorer.

The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard. While the term HTML5 is only used to refer to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript).

Web designers use a variety of different tools depending on what part of the production process they are involved in.

These tools are updated over time by newer standards and software but the principles behind them remain the same.

Web designers use both vector and raster graphics editors to create web-formatted imagery or design prototypes.

Technologies used to create websites include W3C standards like HTML and CSS, which can be hand-coded or generated by WYSIWYG editing software.

Other tools Web designers might use include markup validators[7] and other testing tools for usability and accessibility to ensure their websites meet web accessibility guidelines.[8] Marketing and communication design on a website may identify what works for its target market.

This can be an age group or particular strand of culture; thus the designer may understand the trends of its audience.

Designers may also understand the type of website they are designing, meaning, for example, that (B2B) business-to-business website design considerations might differ greatly from a consumer targeted website such as a retail or entertainment website.

Careful consideration might be made to ensure that the aesthetics or overall design of a site do not clash with the clarity and accuracy of the content or the ease of web navigation,[9] especially on a B2B website.

Designers may also consider the reputation of the owner or business the site is representing to make sure they are portrayed favourably.

User understanding of the content of a website often depends on user understanding of how the website works.

This is part of the user experience design.

User experience is related to layout, clear instructions and labeling on a website.

How well a user understands how they can interact on a site may also depend on the interactive design of the site.

If a user perceives the usefulness of the website, they are more likely to continue using it.

Users who are skilled and well versed with website use may find a more distinctive, yet less intuitive or less user-friendly website interface useful nonetheless.

However, users with less experience are less likely to see the advantages or usefulness of a less intuitive website interface.

This drives the trend for a more universal user experience and ease of access to accommodate as many users as possible regardless of user skill.[10] Much of the user experience design and interactive design are considered in the user interface design.

Advanced interactive functions may require plug-ins if not advanced coding language skills.

Choosing whether or not to use interactivity that requires plug-ins is a critical decision in user experience design.

If the plug-in doesn't come pre-installed with most browsers, there's a risk that the user will have neither the know-how nor the patience to install a plug-in just to access the content.

If the function requires advanced coding language skills, it may be too costly in either time or money to code compared to the amount of enhancement the function will add to the user experience.

There's also a risk that advanced interactivity may be incompatible with older browsers or hardware configurations.

Publishing a function that doesn't work reliably is potentially worse for the user experience than making no attempt.

Whether it is likely to be needed or worth any risks depends on the target audience.

Part of the user interface design is affected by the quality of the page layout.

For example, a designer may consider whether the site's page layout should remain consistent on different pages when designing the layout.

Page pixel width may also be considered vital for aligning objects in the layout design.

The most popular fixed-width websites generally have the same set width to match the current most popular browser window, at the current most popular screen resolution, on the current most popular monitor size.

Most pages are also center-aligned for concerns of aesthetics on larger screens.

Fluid layouts increased in popularity around 2000 as an alternative to HTML-table-based layouts and grid-based design in both page layout design principle and in coding technique, but were very slow to be adopted.[note 1] This was due to considerations of screen reading devices and varying window sizes, over which designers have no control.

Accordingly, a design may be broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas) that are sent to the browser and which will be fitted into the display window by the browser, as best it can.

As the browser does recognize the details of the reader's screen (window size, font size relative to window etc.) the browser can make user-specific layout adjustments to fluid layouts, but not fixed-width layouts.

Although such a display may often change the relative position of major content units, sidebars may be displaced below body text rather than to the side of it.

This is a more flexible display than a hard-coded grid-based layout that doesn't fit the device window.

In particular, the relative position of content blocks may change while leaving the content within the block unaffected.

This also minimizes the user's need to horizontally scroll the page.

Responsive Web design is a newer approach, based on CSS3, and a deeper level of per-device specification within the page's style sheet through an enhanced use of the CSS @media rule.

In March 2018 Google announced they would be rolling out mobile-first indexing.[11] Sites using responsive design are well placed to ensure they meet this new approach.

Web designers may choose to limit the variety of website typefaces to only a few which are of a similar style, instead of using a wide range of typefaces or type styles.

Most browsers recognize a specific number of safe fonts, which designers mainly use in order to avoid complications.

Font downloading was later included in the CSS3 fonts module and has since been implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5.

This has subsequently increased interest in web typography, as well as the usage of font downloading.

Most site layouts incorporate negative space to break the text up into paragraphs and also avoid center-aligned text.[12] The page layout and user interface may also be affected by the use of motion graphics.

The choice of whether or not to use motion graphics may depend on the target market for the website.

Motion graphics may be expected or at least better received with an entertainment-oriented website.

However, a website targeting an audience with a more serious or formal interest (such as business, community, or government) might find animations unnecessary and distracting, if only for entertainment or decoration purposes.

This doesn't mean that more serious content couldn't be enhanced with animated or video presentations that are relevant to the content.

In either case, motion graphic design may make the difference between effective visuals and distracting visuals.

Motion graphics that are not initiated by the site visitor can produce accessibility issues.

The World Wide Web Consortium's accessibility standards require that site visitors be able to disable the animations.[13] Website designers may consider it to be good practice to conform to standards.

This is usually done via a description specifying what the element is doing.

Failure to conform to standards may not make a website unusable or error-prone, but standards can relate to the correct layout of pages for readability as well as making sure coded elements are closed appropriately.

This includes errors in code, more organized layout for code, and making sure IDs and classes are identified properly.

Poorly-coded pages are sometimes colloquially called tag soup.

Validating via W3C[7] can only be done when a correct DOCTYPE declaration is made, which is used to highlight errors in code.

The system identifies the errors and areas that do not conform to Web design standards.

This information can then be corrected by the user.[14] There are two ways websites are generated: statically or dynamically.

A static website stores a unique file for every page of the site.

Each time that page is requested, the same content is returned.

This content is created once, during the design of the website.

It is usually manually authored, although some sites use an automated creation process, similar to a dynamic website, whose results are stored long-term as completed pages.

These automatically-created static sites became more popular around 2015, with generators such as Jekyll and Adobe Muse.[15] The benefits of a static website were that it was simpler to host, as its server only needed to serve static content, not execute server-side scripts.
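
In the spirit of such generators (though far simpler than Jekyll), a static site can be produced by running every content file through a template once, at build time, and then serving the resulting files as-is. The directory layout and template below are assumptions for the sketch.

```python
# Minimal sketch of a static site generator: apply one HTML template to every
# text file in content/ and write finished pages into site/.
from pathlib import Path

TEMPLATE = """<!DOCTYPE html>
<html>
  <head><title>{title}</title></head>
  <body><h1>{title}</h1><p>{body}</p></body>
</html>"""


def build(content_dir="content", output_dir="site"):
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for source in Path(content_dir).glob("*.txt"):
        title, _, body = source.read_text(encoding="utf-8").partition("\n")
        page = TEMPLATE.format(title=title.strip(), body=body.strip())
        (out / f"{source.stem}.html").write_text(page, encoding="utf-8")


if __name__ == "__main__":
    build()  # the generated site/ folder can be served by any static web host
```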

This required less server administration and had less chance of exposing security holes.

They could also serve pages more quickly, on low-cost server hardware.

These advantages became less important as cheap web hosting expanded to also offer dynamic features, and virtual servers offered high performance for short intervals at low cost.

Almost all websites have some static content, as supporting assets such as images and style sheets are usually static, even on a website with highly dynamic pages.

Dynamic websites are generated on the fly and use server-side technology to generate webpages.

They typically extract their content from one or more back-end databases: some are database queries across a relational database to query a catalogue or to summarise numeric information, while others may use a document database such as MongoDB or another NoSQL store to hold larger units of content, such as blog posts or wiki articles.

In the design process, dynamic pages are often mocked-up or wireframed using static pages.

The skillset needed to develop dynamic web pages is much broader than for static pages, involving server-side and database coding as well as client-side interface design.

Even medium-sized dynamic projects are thus almost always a team effort.

When dynamic web pages first developed, they were typically coded directly in languages such as Perl, PHP or ASP.

Some of these, notably PHP and ASP, used a 'template' approach where a server-side page resembled the structure of the completed client-side page and data was inserted into places defined by 'tags'.

This was a quicker means of development than coding in a purely procedural coding language such as Perl.
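
The template idea can be illustrated with Python's built-in string.Template, standing in for the tag-delimited placeholders those languages used; the product record and markup here are invented for the example rather than taken from any particular framework.

```python
# Sketch of the 'template' approach: a server-side page that resembles the
# finished client-side page, with data inserted at tagged placeholders.
from string import Template

PAGE = Template("""<html>
  <head><title>$name</title></head>
  <body>
    <h1>$name</h1>
    <p>Price: $price</p>
  </body>
</html>""")

# Stand-in for a row fetched from a back-end database.
record = {"name": "Example Widget", "price": "19.99"}
print(PAGE.substitute(record))
```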

Both of these approaches have now been supplanted for many websites by higher-level application-focused tools such as content management systems.

These build on top of general purpose coding platforms and assume that a website exists to offer content according to one of several well recognised models, such as a time-sequenced blog, a thematic magazine or news site, a wiki or a user forum.

These tools make the implementation of such a site very easy, and a purely organisational and design-based task, without requiring any coding.

Editing the content itself (as well as the template page) can be done both by means of the site itself, and with the use of third-party software.

The ability to edit all pages is provided only to a specific category of users (for example, administrators, or registered users).

Less frequently, anonymous users are allowed to edit certain web content (for example, adding messages on forums).

An example of a site with an anonymous change is Wikipedia.

Usability experts, including Jakob Nielsen and Kyle Soucy, have often emphasised homepage design for website success and asserted that the homepage is the most important page on a website.[16][17][18][19] However, practitioners into the 2000s were starting to find that a growing amount of website traffic was bypassing the homepage, going directly to internal content pages through search engines, e-newsletters and RSS feeds,[20] leading many practitioners to argue that homepages are less important than most people think.[21][22][23][24] Jared Spool argued in 2007 that a site's homepage was actually the least important page on a website.[25]

In 2012 and 2013, carousels (also called 'sliders' and 'rotating banners') became an extremely popular design element on homepages, often used to showcase featured or recent content in a confined space.[26][27] Many practitioners argue that carousels are an ineffective design element and hurt a website's search engine optimisation and usability.[27][28][29]

There are two primary jobs involved in creating a website: the Web designer and the web developer, who often work closely together on a website.[30] Web designers are responsible for the visual aspect, which includes the layout, coloring and typography of a web page.

Web designers will also have a working knowledge of markup languages such as HTML and CSS, although the extent of their knowledge will differ from one Web designer to another.

Particularly in smaller organizations, one person will need the necessary skills for designing and programming the full web page, while larger organizations may have a Web designer responsible for the visual aspect alone.[31] Further jobs which may become involved in the creation of a website include:

Web content vendor

A Web content vendor can be a company or an individual that provides more than one web service.

Usually, a content vendor provides content writing, web design and development, and SEO services.

Website content writer

A Website content writer or web content writer is a person who specializes in providing relevant content for websites.

Every website has a specific target audience and requires the most relevant content to attract business.

Content should contain keywords (specific business-related terms, which internet users might use in order to search for services or products) aimed towards improving a website's SEO.

Generally, a website content writer who has this knowledge of SEO is also referred to as an SEO content writer.

Most story pieces are centered on marketing products or services, though this is not always the case.

Some websites are informational only and do not sell a product or service.

These websites are often news sites or blogs.

Informational sites educate the reader with complex information that is easy to understand and retain.

There is a growing demand for skilled web content writing on the Internet.

Quality content often translates into higher revenues for online businesses.

Website owners and managers depend on content writers to perform several major tasks. Website content writing aims for relevance and search-ability.

Relevance means that the website text should be useful and beneficial to readers.

Search-ability indicates usage of keywords to help search engines direct users to websites that meet their search criteria.

There are various ways through which websites source their article writing, and one of them is outsourcing the content writing.

However, it is riskier than other options, as not all writers can write content specific to the web.

Content can be written for various purposes in various forms.

The content of a website differs based on the product or service it is used for.

Writing online is different from composing and constructing content for printed materials.

Web users tend to scan text instead of reading it closely, skipping what they perceive to be unnecessary information and hunting for what they regard as most relevant.

It is estimated that seventy-nine percent of users scan web content.

It is also reported that it takes twenty-five percent more time to scan content online compared to print content.[1] Web content writers must have the skills to insert paragraphs and headlines containing keywords for search engine optimization, as well as to make sure their composition is clear, to reach their target market.

They need to be skilled writers and good at engaging an audience as well as understanding the needs of web users.

Website content writing is frequently outsourced to external providers, such as individual web copywriters or, for larger or more complex projects, a specialized digital marketing agency.

Most content writers also spend time learning about digital marketing, with a focus on search engine optimization, pay per click, social media optimization, and so on, so that they can develop the right content to help clients market their business more easily.

Digital marketing agencies combine copy-writing services with a range of editorial and associated services that may include brand positioning, message consulting, social media, SEO consulting, developmental and copy editing, proofreading, fact checking, layout, content syndication, and design.

Outsourcing allows businesses to focus on core competencies and to benefit from the specialized knowledge of professional copywriters and editors.

Progressive enhancement

Progressive enhancement is a strategy for web design that emphasizes core webpage content first.

This strategy then progressively adds more nuanced and technically rigorous layers of presentation and features on top of the content as the end-user's browser/internet connection allow.

The proposed benefits of this strategy are that it allows everyone to access the basic content and functionality of a web page, using any browser or Internet connection, while also providing an enhanced version of the page to those with more advanced browser software or greater bandwidth.

"Progressive enhancement" was coined by Steven Champeon & Nick Finck at the SXSW Interactive conference on March 11, 2003 in Austin,[1] and through a series of articles for Webmonkey which were published between March and June 2003.[2] Specific Cascading Style Sheets (CSS) techniques pertaining to flexibility of the page layout accommodating different screen resolutions is the concept associated with responsive web design approach.

.net Magazine chose Progressive enhancement as #1 on its list of Top Web Design Trends for 2012 (responsive design was #2).[3] Google has encouraged the adoption of Progressive enhancement to help "our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported".[4] The strategy is an evolution of a previous web design strategy known as graceful degradation, wherein designers would create Web pages for the latest browsers that would also work well in older versions of browser software.

Graceful degradation was supposed to allow the page to "degrade", or remain presentable even if certain technologies assumed by the design were not present, without being jarring to the user of such older software.

In Progressive enhancement (PE) the strategy is deliberately reversed: a basic markup document is created, geared towards the lowest common denominator of browser software functionality, and then the designer adds in functionality or enhancements to the presentation and behavior of the page, using modern technologies such as Cascading Style Sheets, Scalable Vector Graphics (SVG), or JavaScript.

All such enhancements are externally linked, preventing data unusable by certain browsers from being unnecessarily downloaded.

The Progressive enhancement approach is derived from Champeon's early experience (c. 1993-4) with Standard Generalized Markup Language (SGML), before working with HTML or any Web presentation languages, as well as from later experiences working with CSS to work around browser bugs.

In those early SGML contexts, semantic markup was of key importance, whereas presentation was nearly always considered separately, rather than being embedded in the markup itself.

This concept is variously referred to in markup circles as the rule of separation of presentation and content, separation of content and style, or of separation of semantics and presentation.

As the Web evolved in the mid-nineties, but before CSS was introduced and widely supported, this cardinal rule of SGML was repeatedly violated by HTML's extenders.

As a result, web designers were forced to adopt new, disruptive technologies and tags in order to remain relevant.

With a nod to graceful degradation, in recognition that not everyone had the latest browser, many began to simply adopt design practices and technologies only supported in the most recent and perhaps the single previous major browser releases.

For several years, much of the Web simply did not work in anything but the most recent, most popular browsers.

This remained true until the rise and widespread adoption of and support for CSS, as well as many populist, grassroots educational efforts (from Eric Costello, Owen Briggs, Dave Shea, and others) showing Web designers how to use CSS for layout purposes.

Progressive enhancement is based on a recognition that the core assumption behind "graceful degradation" — that browsers always got faster and more powerful — was proving itself false with the rise of handheld and PDA devices with low-functionality browsers and serious bandwidth constraints.

In addition, the rapid evolution of HTML and related technologies in the early days of the Web has slowed, and very old browsers have become obsolete, freeing designers to use powerful technologies such as CSS to manage all presentation tasks and JavaScript to enhance complex client-side behavior.

First proposed as a somewhat less unwieldy catchall phrase to describe the delicate art of "separating document structure and contents from semantics, presentation, and behavior", and based on the then-common use of CSS hacks to work around rendering bugs in specific browsers, the Progressive enhancement strategy has taken on a life of its own as new designers have embraced the idea and extended and revised the approach.

The Progressive enhancement strategy consists of a set of core principles. Web pages created according to these principles are by their nature more accessible, because the strategy demands that basic content always be available, not obstructed by commonly unsupported or easily disabled scripting.

Additionally, the sparse markup principle makes it easier for tools that read content aloud to find that content.

It is unclear how well Progressive enhancement sites work with older tools designed to deal with table layouts, "tag soup", and the like. Improved results with respect to search engine optimization (SEO) are another side effect of a Progressive enhancement-based Web design strategy.

Because the basic content is always accessible to search engine spiders, pages built with Progressive enhancement methods avoid problems that may hinder search engine indexing.[14] Some skeptics, such as Garret Dimon, have expressed their concern that Progressive enhancement is not workable in situations that rely heavily on JavaScript to achieve certain user interface presentations or behaviors,[15] to which unobtrusive JavaScript is one response.

Others have countered with the point that informational pages should be coded using Progressive enhancement in order to be indexed by spiders,[16] and that even Flash-heavy pages should be coded using Progressive enhancement.[17] In a related area, many have expressed their doubts concerning the principle of the separation of content and presentation in absolute terms, pushing instead for a realistic recognition that the two are inextricably linked.[18][19]

Logical Position

Logical Position is a digital marketing and SEO company based in Lake Oswego, Oregon[1] with other offices in the United States.[2][3] Logical Position was founded by Michael Weinhouse in 2009.[4] The company specializes in pay-per-click (PPC) management and search engine optimization (SEO), along with offering web design and graphic design services.[5] Logical Position has grown rapidly since its founding, growing 793% between 2010 and 2015.[6][7] In 2015, the company continued its growth by acquiring Virtue Advertising in Chicago and TQE Marketing in Oregon.

It ranked in Inc. Magazine’s list of 5000 Fastest Growing Private Companies in America.[8] The company has also been listed in the top 20 of Portland Business Journal’s list of Fastest Growing Private Companies.[9][7] Logical Position has received numerous awards for its company culture.

It ranked #3 in Inc. Magazine’s list of Best Workplaces in America (2017).[10] The Oregonian ranked it as a Top Workplace (2013, 2015, 2016, 2017).[11] The company also appeared in Oregon Business’s list of 100 Best Companies.[12] To promote employee engagement and community events, Logical Position created an employee-led council known as A.C.E.S. (Activities, Community, Education and Social).[13] The council has organized service projects for the hungry,[14][15] environmental clean-up efforts,[16] and blood drives.[17] The company was a finalist for the Oregon Business Ethics Award in 2017.[18] Logical Position has earned a number of accolades within the pay-per-click industry.

It is a Premier Google Partner and Bing Elite SMB Partner.[19] Bing named it Partner of the Year for the Americas in 2018.[20] It was also named Bing’s Growth Channel Partner of the Year in 2017.[21] Logical Position has ranked #4 on TopSEOs.com’s list of 100 Top Pay-Per-Click Management Services.[22] The company is active in sponsoring industry-wide events.[23]

BlackHatWorld

BlackHatWorld (BHW) is an internet forum owned by Damien Trevatt focused on black-hat search engine optimization (SEO) techniques and services, often known as spamdexing.[1] Site services are varied, including copywriting; graphic design; web design; SEO, including both onpage and offpage optimization; social media marketing; and app development.[1] Other site services include bulk account registration,[3] unconventional money making scams,[4] video game bots,[5] and developments in the SEO space.[5][6] BlackHatWorld has a community of users who express an interest in online marketing and digital business, and is a place where people are able to share ideas and seek advice from other members.

BlackHatWorld is considered[by whom?] one of the biggest online forums for digital marketing.[citation needed] The forum does not focus only on black-hat marketing practices; it also has sections dedicated to white-hat activities.[7]

Web content management system

A Web content management system (WCM or WCMS)[1] is a software content management system (CMS) specifically for web content.

It provides website authoring, collaboration, and administration tools that help users with little knowledge of web programming languages or markup languages create and manage website content.

A WCMS provides the foundation for collaboration, giving users the ability to manage documents and output for multi-author editing and participation.

Most systems use a content repository or a database to store page content, metadata, and other information assets the system needs.

A presentation layer (template engine) displays the content to website visitors based on a set of templates, which are sometimes XSLT files.[2] Most systems use server side caching to improve performance.

Caching works best when the content changes infrequently but the site receives visits regularly.
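
As a minimal sketch of that arrangement, the following TypeScript fragment (the repository, template, and page type are hypothetical stand-ins, not any particular WCMS) renders stored content through a template and caches the resulting HTML on the server side, so repeat visits skip the template engine until the page is edited again.

// Hypothetical types and helpers; a real WCMS would use a database-backed repository and a template engine.
type Page = { slug: string; title: string; body: string };

const repository = new Map<string, Page>();     // stands in for the content repository
const renderCache = new Map<string, string>();  // server-side cache of rendered HTML

function renderTemplate(page: Page): string {
  // Presentation layer: wraps stored content in a site-wide template.
  return `<html><head><title>${page.title}</title></head><body>${page.body}</body></html>`;
}

function servePage(slug: string): string | undefined {
  const cached = renderCache.get(slug);
  if (cached) return cached;                    // cache hit: no template work at request time
  const page = repository.get(slug);
  if (!page) return undefined;
  const html = renderTemplate(page);
  renderCache.set(slug, html);                  // cached until the page is edited and invalidated
  return html;
}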

Administration is also typically done through browser-based interfaces, but some systems require the use of a fat client.

A Web content management system controls a dynamic collection of web material, including HTML documents, images, and other forms of media.[3] A WCMS facilitates document control, auditing, editing, and timeline management.

A WCMS typically offers a standard set of features.[4][5] A WCMS can use one of three approaches: offline processing, online processing, and hybrid processing.

These terms describe the deployment pattern for the WCMS in terms of when it applies presentation templates to render web pages from structured content.

Offline processing systems, sometimes referred to as "static site generators",[7] pre-process all content, applying templates before publication to generate web pages.

Since pre-processing systems do not require a server to apply the templates at request time, they may also exist purely as design-time tools.
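
A hedged sketch of the offline (pre-processing) pattern follows; the entries, template, and output directory are illustrative only. Every page is rendered to a static HTML file at publish time, so the web server that later delivers the files needs no template engine at all.

// Build-time rendering: structured content in, static .html files out.
import { writeFileSync, mkdirSync } from 'fs';
import { join } from 'path';

type Entry = { slug: string; title: string; body: string };

const entries: Entry[] = [
  { slug: 'index', title: 'Home', body: '<p>Welcome.</p>' },
  { slug: 'about', title: 'About', body: '<p>About this site.</p>' },
];

const outDir = 'public'; // hypothetical output directory served by any plain web server
mkdirSync(outDir, { recursive: true });

for (const entry of entries) {
  // The template is applied once, before publication, rather than on each request.
  const html = `<html><head><title>${entry.title}</title></head><body>${entry.body}</body></html>`;
  writeFileSync(join(outDir, `${entry.slug}.html`), html);
}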

Online processing systems apply templates on demand.

They may generate HTML when a user visits the page, or the user might receive pre-generated HTML from a web cache.

Most open source WCMSs support add-ons that extend the system's capabilities.

These include features like forums, blogs, wikis, web stores, photo galleries, and contact management.

These are variously called modules, nodes, widgets, add-ons, or extensions.

Hybrid systems combine the offline and online approaches.

Some systems write out executable code (e.g., JSP, ASP, PHP, ColdFusion, or Perl pages) rather than just static HTML.

That way, personnel don't have to deploy the WCMS itself on every web server.

Other hybrids operate in either an online or offline mode.

Designing with Web Standards

Designing with Web Standards, first published in 2003 with revised editions in 2007 and 2009, is a web development book by Jeffrey Zeldman.

The book’s audience is primarily web development professionals who aim to produce design work that complies with web standards.

The work is used as a textbook in over 85 colleges.

Written by Jeffrey Zeldman, a leading proponent of standards-compliant web design, Designing with Web Standards guides the reader on how to better utilize web standards pragmatically to create accessible, user-friendly web sites.

Designing with Web Standards reiterates many of the arguments previously advanced by the Web Standards Project to highlight the benefits of standards-compliant web design.[1] The book first came out in 2003,[2] and appeared in two revised editions, one in 2007,[3] and another, co-authored with Ethan Marcotte, in 2009.[4] Also in 2009, a companion volume appeared from the same publisher under the title Developing with Web Standards, written by John Allsopp.[5] The book’s third and most recent edition was released on October 25, 2009, by New Riders Press.

It has received generally positive feedback, with a four out of five-star rating on Amazon.com from 137 reviewers.

Reviewers have noted that the witty, conversational tone of the book, mixed with the in-depth technical analysis, is enough “to keep you turning the pages.”[6] Amazon.com book reviewer David Wall[6] notes that the book is “a fantastic education that any design professional will appreciate.” Wall goes on to praise Zeldman’s pragmatic approach, as well as the “tightly focused tips” he provides and bolsters with code examples to illustrate his points.

Some critics have said that the book is aimed more at web design novices, mentions a few out-of-date browsers, and lacks detail.[citation needed] The first half of Zeldman's Designing with Web Standards in 2003 consolidated the case for web standards in terms of accessibility, search engine optimization, portability of content with an eye toward mobile and other emerging environments, lowered bandwidth and production cost, and other benefits.

This section of the book addressed marketers and site owners as well as web developers and designers.

The second section of the book was a how-to for designers and developers.

How-to books were common in the web industry, although almost none at the time taught web standards.

What made the first edition of Designing with Web Standards unique was its focus on making the case for forward compatibility, accessibility, and SEO to all who own, manage or use web sites, not just developers.

The book is credited with converting the industry from tag soup and Flash to semantics and accessibility via correct use of HTML, CSS, and JavaScript.

Subsequent editions, while continuing to address the state of the Web and the benefits of standards-based design, have also focused on emerging technologies such as HTML5 and CSS3, and on emerging design strategies such as Responsive Web Design (RWD) and "Mobile First." The book cover famously showed Zeldman with a blue knit hat, which inspired Douglas Vos to invent the Blue Beanie Day[7], an annual international celebration of web standards which began in 2007[8].

Designing with Web Standards has been translated into 15 different languages, including (for the last edition) Italian, Chinese, Hungarian, Polish and Portuguese.

Web content development

Web content development is the process of researching, writing, gathering, organizing, and editing information for publication on websites.

Website content may consist of prose, graphics, pictures, recordings, movies, or other digital assets that could be distributed by a hypertext transfer protocol server, and viewed by a web browser.

When the World Wide Web began, web developers either developed online content themselves, or modified existing documents and coded them into hypertext markup language (HTML).

In time, the field of website development came to encompass many technologies, so it became difficult for website developers to maintain so many different skills.

Content developers are specialized website developers who have content generation skills such as graphic design, multimedia development, professional writing, and documentation.

They can integrate content into new or existing websites without using information technology skills such as script language programming and database programming.

Content developers or technical content developers can also be technical writers who produce technical documentation that helps people understand and use a product or service.

This documentation includes online help, manuals, white papers, design specifications, developer guides, deployment guides, release notes, etc.

Content developers may also be search engine optimization specialists, or internet marketing professionals.

High quality, unique content is what search engines are looking for.

Content development specialists, therefore, have a very important role to play in the search engine optimization process.

One issue currently plaguing the world of Web content development is keyword-stuffed content that is prepared solely for the purpose of manipulating search engine rankings.

The effect is that content is written to appeal to search engine algorithms rather than human readers.

Search engine optimization specialists commonly submit content to article directories to build their website's authority on any given topic.

Most article directories allow visitors to republish submitted content with the agreement that all links are maintained.

This has become a method of search engine optimization for many websites today.

If written according to SEO copywriting rules, the submitted content will bring benefits to the publisher (free SEO-friendly content for a webpage) as well as to the author (a hyperlink pointing to his/her website, placed on an SEO-friendly webpage).[1] Web content is no longer restricted to text.

Search engines now index audio/visual media, including video, images, PDFs, and other elements of a web page.

Website owners sometimes use content protection networks to scan for plagiarized content.

Spamdexing

In digital marketing and online advertising, Spamdexing (also known as search engine spam, search engine poisoning, black-hat search engine optimization (SEO), search spam or web spam)[1] is the deliberate manipulation of search engine indexes.

It involves a number of methods, such as link building and repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed, in a manner inconsistent with the purpose of the indexing system.[2][3] Spamdexing could be considered to be a part of search engine optimization, although there are many search engine optimization methods that improve the quality and appearance of the content of web sites and serve content useful to many users.[4] Search engines use a variety of algorithms to determine relevancy ranking.

Some of these include determining whether the search term appears in the body text or URL of a web page.

Many search engines check for instances of Spamdexing and will remove suspect pages from their indexes.

Also, search-engine operators can quickly block the results listing from entire websites that use Spamdexing, perhaps in response to user complaints of false matches.

The rise of Spamdexing in the mid-1990s made the leading search engines of the time less useful.

Using unethical methods to make websites rank higher in search engine results than they otherwise would is commonly referred to in the SEO (search engine optimization) industry as "black-hat SEO".

These methods are more focused on breaking the search-engine-promotion rules and guidelines.

In addition to this, the perpetrators run the risk of their websites being severely penalized by the Google Panda and Google Penguin search-results ranking algorithms.[5] Common Spamdexing techniques can be classified into two broad classes: content spam[4] (or term spam) and link spam.[3] The earliest known reference[2] to the term Spamdexing is by Eric Convey in his article "Porn sneaks way back on Web," The Boston Herald, May 22, 1996, where he said: The problem arises when site operators load their Web pages with hundreds of extraneous terms so search engines will list them among legitimate addresses.

The process is called "Spamdexing," a combination of spamming — the Internet term for sending users unsolicited information — and "indexing."[2] These techniques involve altering the logical view that a search engine has over the page's contents.

They all aim at variants of the vector space model for information retrieval on text collections.

Keyword stuffing involves the calculated placement of keywords within a page to raise the keyword count, variety, and density of the page.

This is useful to make a page appear to be relevant for a web crawler in a way that makes it more likely to be found.

Example: A promoter of a Ponzi scheme wants to attract web surfers to a site where he advertises his scam.

He places hidden text appropriate for a fan page of a popular music group on his page, hoping that the page will be listed as a fan site and receive many visits from music lovers.

Older versions of indexing programs simply counted how often a keyword appeared, and used that to determine relevance levels.
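
To make that counting approach concrete, here is a small TypeScript sketch (not any engine's actual algorithm) of a purely frequency-based relevance score; it shows why repeating a keyword inflated rankings until engines began comparing keyword density against that of normal pages.

// Naive term-frequency scoring of the kind early indexers used; keyword stuffing inflates it directly.
function keywordScore(pageText: string, keyword: string): { count: number; density: number } {
  const words = pageText.toLowerCase().split(/\W+/).filter(Boolean);
  const count = words.filter((word) => word === keyword.toLowerCase()).length;
  const density = words.length > 0 ? count / words.length : 0;
  return { count, density };
}

// A stuffed page scores far higher than a naturally written one for the same keyword.
console.log(keywordScore('tickets tickets tickets buy tickets now tickets', 'tickets'));
console.log(keywordScore('You can buy concert tickets at the box office.', 'tickets'));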

Most modern search engines have the ability to analyze a page for keyword stuffing and determine whether the frequency is consistent with other sites created specifically to attract search engine traffic.

Also, large webpages are truncated, so that massive dictionary lists cannot be indexed on a single webpage.[citation needed] (However, spammers can circumvent this webpage-size limitation merely by setting up multiple webpages, either independently or linked to each other.) Unrelated hidden text is disguised by making it the same color as the background, using a tiny font size, or hiding it within HTML code such as "no frame" sections, alt attributes, zero-sized DIVs, and "no script" sections.

People manually screening red-flagged websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some of its pages.

However, hidden text is not always Spamdexing: it can also be used to enhance accessibility.

Meta-tag stuffing involves repeating keywords in the meta tags, and using meta keywords that are unrelated to the site's content.

This tactic has been ineffective since 2005.[citation needed] "Gateway" or doorway pages are low-quality web pages created with very little content, but are instead stuffed with very similar keywords and phrases.

They are designed to rank highly within the search results, but serve no purpose to visitors looking for information.

A doorway page will generally have "click here to enter" on the page; autoforwarding can also be used for this purpose.

In 2006, Google ousted vehicle manufacturer BMW for using "doorway pages" to the company's German site, BMW.de.[6] Scraper sites are created using various programs designed to "scrape" search-engine results pages or other sources of content and create "content" for a website.[citation needed] The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission.

Such websites are generally full of advertising (such as pay-per-click ads), or they redirect the user to other sites.

It is even feasible for scraper sites to outrank original websites for their own information and organization names.

Article spinning involves rewriting existing articles, as opposed to merely scraping content from other sites, to avoid penalties imposed by search engines for duplicate content.

This process is undertaken by hired writers or automated using a thesaurus database or a neural network.

Similarly to article spinning, some sites use machine translation to render their content in several languages, with no human editing, resulting in unintelligible texts that nonetheless continue to be indexed by search engines, thereby attracting traffic.

Publishing web pages that contain information that is unrelated to the title is a misleading practice known as deception.

Despite being a target for penalties from the leading search engines that rank pages, deception is a common practice in some types of sites, including dictionary and encyclopedia sites.

Link spam is defined as links between pages that are present for reasons other than merit.[7] Link spam takes advantage of link-based ranking algorithms, which give a website a higher ranking the more highly ranked websites link to it.

These techniques also aim at influencing other link-based ranking techniques such as the HITS algorithm.[citation needed] Link farms are tightly-knit networks of websites that link to each other for the sole purpose of gaming the search engine ranking algorithms.

These are also known facetiously as mutual admiration societies.[8] Use of link farms was greatly reduced after Google launched the first Panda update in February 2011, which introduced significant improvements in its spam-detection algorithm.

Private blog networks (PBNs) are groups of authoritative websites used as a source of contextual links that point to the owner's main website to achieve higher search engine rankings.

Owners of PBN websites use expired domains or auction domains that have backlinks from high-authority websites.

Google has targeted and penalized PBN users on several occasions with massive deindexing campaigns since 2014.[9] Hidden links involve putting hyperlinks where visitors will not see them in order to increase link popularity.

Highlighted link text can help rank a webpage higher for matching that phrase.

A Sybil attack is the forging of multiple identities for malicious intent, named after the famous multiple personality disorder patient "Sybil".

A spammer may create multiple web sites at different domain names that all link to each other, such as fake blogs (known as spam blogs).

Spam blogs are blogs created solely for commercial promotion and the passage of link authority to target sites.

Often these "splogs" are designed in a misleading manner to give the appearance of a legitimate website, but on close inspection they are often found to be written using spinning software or to contain poorly written, barely readable content.

They are similar in nature to link farms.

Guest blog spam is the process of placing guest blogs on websites for the sole purpose of gaining a link to another website or websites.

Unfortunately, these are often confused with legitimate forms of guest blogging with other motives than placing links.

This technique was made famous by Matt Cutts, who publicly declared "war" against this form of link spam.[10] Some link spammers utilize expired domain crawler software or monitor DNS records for domains that will expire soon, then buy them when they expire and replace the pages with links to their pages.

However, it is possible but not confirmed that Google resets the link data on expired domains.[citation needed] To maintain all previous Google ranking data for the domain, it is advisable that a buyer grab the domain before it is "dropped".

Some of these techniques may be applied for creating a Google bomb — that is, to cooperate with other users to boost the ranking of a particular page for a particular query.

Cookie stuffing involves placing an affiliate tracking cookie on a website visitor's computer without their knowledge, which will then generate revenue for the person doing the cookie stuffing.

This not only generates fraudulent affiliate sales, but also has the potential to overwrite other affiliates' cookies, essentially stealing their legitimately earned commissions.

Web sites that can be edited by users can be used by spamdexers to insert links to spam sites if the appropriate anti-spam measures are not taken.

Automated spambots can rapidly make the user-editable portion of a site unusable.

Programmers have developed a variety of automated spam prevention techniques to block or at least slow down spambots.

Spam in blogs is the placing or solicitation of links randomly on other sites, placing a desired keyword into the hyperlinked text of the inbound link.

Guest books, forums, blogs, and any site that accepts visitors' comments are particular targets and are often victims of drive-by spamming where automated software creates nonsense posts with links that are usually irrelevant and unwanted.

Comment spam is a form of link spam that has arisen in web pages that allow dynamic user editing such as wikis, blogs, and guestbooks.

It can be problematic because agents can be written that automatically randomly select a user edited web page, such as a Wikipedia article, and add spamming links.[11] Wiki spam is a form of link spam on wiki pages.

The spammer uses the open editability of wiki systems to place links from the wiki site to the spam site.

The subject of the spam site is often unrelated to the wiki page where the link is added.

Referrer spam takes place when a spam perpetrator or facilitator accesses a web page (the referee), by following a link from another web page (the referrer), so that the referee is given the address of the referrer by the person's Internet browser.

Some websites have a referrer log which shows which pages link to that site.

By having a robot randomly access many sites enough times, with a message or specific address given as the referrer, that message or Internet address then appears in the referrer log of those sites that have referrer logs.

Since some Web search engines base the importance of sites on the number of different sites linking to them, referrer-log spam may increase the search engine rankings of the spammer's sites.

Also, site administrators who notice the referrer log entries in their logs may follow the link back to the spammer's referrer page.

Because of the large amount of spam posted to user-editable webpages, Google proposed a nofollow tag that could be embedded with links.

A link-based search engine, such as Google's PageRank system, will not use the link to increase the score of the linked website if the link carries a nofollow tag.

This ensures that spamming links to user-editable websites will not raise those sites' rankings with search engines.
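
A minimal sketch of how a site that accepts user-submitted HTML might apply the attribute is shown below; the helper name is made up, and real platforms do this (together with sanitization) inside their comment-rendering pipelines.

// Adds rel="nofollow" to every anchor in a fragment of user-submitted HTML (browser DOM sketch).
function addNofollow(userHtml: string): string {
  const container = document.createElement('div');
  container.innerHTML = userHtml; // note: real systems also sanitize untrusted HTML
  container.querySelectorAll('a').forEach((anchor) => {
    anchor.setAttribute('rel', 'nofollow'); // link-based ranking algorithms ignore these links
  });
  return container.innerHTML;
}

// Example: a spammy comment link no longer passes ranking credit.
console.log(addNofollow('<p>Nice post! <a href="https://example.com/spam">cheap pills</a></p>'));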

Nofollow is used by several major websites, including Wordpress, Blogger and Wikipedia.[citation needed] A mirror site is the hosting of multiple websites with conceptually similar content but using different URLs.

Some search engines give a higher rank to results where the keyword searched for appears in the URL.

URL redirection is the taking of the user to another page without his or her intervention, e.g., using META refresh tags, Flash, JavaScript, Java or Server side redirects.

However, a 301 redirect, or permanent redirect, is not considered malicious behavior.
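
For contrast with deceptive redirects, here is a minimal sketch (hypothetical paths and port, Node-style server) of a legitimate 301 permanent redirect, which simply tells browsers and crawlers that a page has moved.

// Minimal HTTP server issuing a 301 permanent redirect for a page that has moved.
import { createServer } from 'http';

const server = createServer((req, res) => {
  if (req.url === '/old-page') {
    res.writeHead(301, { Location: '/new-page' }); // permanent redirect: not treated as spam
    res.end();
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>New page</h1>');
});

server.listen(8080); // hypothetical port for the sketch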

Cloaking refers to any of several means to serve a page to the search-engine spider that is different from that seen by human users.

It can be an attempt to mislead search engines regarding the content on a particular web site.

Cloaking, however, can also be used to ethically increase accessibility of a site to users with disabilities or provide human users with content that search engines aren't able to process or parse.

It is also used to deliver content based on a user's location; Google itself uses IP delivery, a form of cloaking, to deliver results.

Another form of cloaking is code swapping, i.e., optimizing a page for top ranking and then swapping another page in its place once a top ranking is achieved.

Google refers to these types of redirects as Sneaky Redirects.[12] Spamdexed pages are sometimes eliminated from search results by the search engine.

Users can refine a search query to exclude such pages: for example, prefixing a keyword with "-" (minus) eliminates sites that contain that keyword in their pages or in the domain of their URLs.

For example, the search term "-naver" eliminates sites that contain the word "naver" in their pages as well as pages whose URL domain contains "naver".

Google itself launched the Google Chrome extension "Personal Blocklist (by Google)" in 2011 as part of countermeasures against content farming.[13][14] As of 2018, the extension only works with the PC version of Google Chrome.

Search engine marketing

Search engine marketing (SEM) is a form of Internet marketing that involves the promotion of websites by increasing their visibility in search engine results pages (SERPs), primarily through paid advertising.[1] SEM may incorporate search engine optimization (SEO), which adjusts or rewrites website content and site architecture to achieve a higher ranking in search engine results pages to enhance pay per click (PPC) listings.[2] In 2007, U.S. advertisers spent US$24.6 billion on search engine marketing.[3] In Q2 2015, Google (73.7%) and the Yahoo/Bing (26.3%) partnership accounted for almost 100% of U.S. search engine spend.[4] As of 2006, SEM was growing much faster than traditional advertising and even other channels of online marketing.[5] Managing search campaigns is either done directly with the SEM vendor or through an SEM tool provider.

It may also be self-serve or through an advertising agency.

As of October 2016, Google leads the global search engine market with a market share of 89.3%.

Bing comes second with a market share of 4.36%, Yahoo comes third with a market share of 3.3%, and Chinese search engine Baidu is fourth globally with a share of about 0.68%.[6] As the number of sites on the Web increased in the mid-to-late 1990s, search engines started appearing to help people find information quickly.

Search engines developed business models to finance their services, such as pay per click programs offered by Open Text[7] in 1996 and then Goto.com[8] in 1998.

Goto.com later changed its name[9] to Overture in 2001, was purchased by Yahoo! in 2003, and now offers paid search opportunities for advertisers through Yahoo! Search Marketing.

Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program.

By 2007, pay-per-click programs proved to be primary moneymakers[10] for search engines.

In a market dominated by Google, in 2009 Yahoo! and Microsoft announced the intention to forge an alliance.

The Yahoo! & Microsoft Search Alliance eventually received approval from regulators in the US and Europe in February 2010.[11] Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged.

The term "Search engine marketing" was popularized by Danny Sullivan in 2001[12] to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals.

Search engine marketing uses at least five methods and metrics to optimize websites.[citation needed] Search engine marketing is a way to create and edit a website so that search engines rank it higher than other pages.

It also focuses on keyword marketing and pay-per-click (PPC) advertising.

The technology enables advertisers to bid on specific keywords or phrases and ensures ads appear with the results of search engines.

As this system has developed, prices have risen under a high level of competition.

Many advertisers prefer to expand their activities, for example by advertising on more search engines and adding more keywords.

The more advertisers are willing to pay for clicks, the higher the ranking for advertising, which leads to higher traffic.[15] PPC comes at a cost.

For a given keyword, the top position might cost $5 per click, while the third position might cost $4.50; the third advertiser thus pays about 10% less per click than the top advertiser but typically receives around 50% less traffic.[15]

Investors must consider their return on investment when engaging in PPC campaigns.

Buying traffic via PPC will deliver a positive ROI when the total cost per click for a single conversion remains below the profit margin.

That way, the amount of money spent to generate revenue stays below the revenue actually generated,[16] and a positive ROI is the outcome.
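
A small worked sketch of that break-even reasoning follows, with made-up numbers: given a conversion rate and a profit per sale, the cost per click must stay below the conversion rate multiplied by the profit per sale (equivalently, the cost of the clicks needed for one conversion must stay below the profit margin) for the campaign to return a positive ROI.

// Break-even check for a PPC campaign (all figures hypothetical).
function ppcRoi(costPerClick: number, conversionRate: number, profitPerSale: number, clicks: number) {
  const spend = costPerClick * clicks;
  const grossProfit = clicks * conversionRate * profitPerSale; // profit generated by the bought traffic
  const breakEvenCpc = conversionRate * profitPerSale;          // highest CPC that still yields positive ROI
  return { spend, grossProfit, roi: (grossProfit - spend) / spend, breakEvenCpc };
}

// E.g. 2% of clicks convert and each sale carries $50 of margin: CPC must stay under $1.00.
console.log(ppcRoi(0.8, 0.02, 50, 1000)); // roi is positive, since $0.80 < $1.00 break-even CPC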

There are many reasons explaining why advertisers choose the SEM strategy.

First, creating an SEM account is easy and can build traffic quickly, depending on the degree of competition.

Shoppers who use a search engine to find information tend to trust and focus on the links shown in the results pages.

However, a large number of online sellers do not invest in search engine optimization to obtain higher rankings in the search results, but prefer paid links instead.

A growing number of online publishers are allowing search engines such as Google to crawl content on their pages and place relevant ads on it.[17] From an online seller's point of view, this is an extension of the payment settlement and an additional incentive to invest in paid advertising projects.

Therefore, it is virtually impossible for advertisers with limited budgets to maintain the highest rankings in the increasingly competitive search market.

Google is one of the western world's leaders in search engine marketing, and search advertising is its biggest source of profit.[18] Google's search advertising network is clearly ahead of the Yahoo and Bing network.

The display of organic (unpaid) search results is free, while advertisers are willing to pay for each click of an ad in the sponsored search results.

Paid inclusion involves a search engine company charging fees for the inclusion of a website in their results pages.

Also known as sponsored listings, paid inclusion products are provided by most search engine companies either in the main results area or as a separately identified advertising area.

The fee structure is both a filter against superfluous submissions and a revenue generator.

Typically, the fee covers an annual subscription for one webpage, which will automatically be catalogued on a regular basis.

However, some companies are experimenting with non-subscription based fee structures where purchased listings are displayed permanently.

A per-click fee may also apply.

Each search engine is different.

Some sites allow only paid inclusion, although these have had little success.

More frequently, many search engines, like Yahoo!,[19] mix paid inclusion (per-page and per-click fee) with results from web crawling.

Others, like Google (and as of 2006, Ask.com[20][21]), do not let webmasters pay to be in their search engine listing (advertisements are shown separately and labeled as such).

Some detractors of paid inclusion allege that it causes searches to return results based more on the economic standing of the interests of a web site, and less on the relevancy of that site to end-users.

Often the line between pay per click advertising and paid inclusion is debatable.

Some have lobbied for any paid listings to be labeled as an advertisement, while defenders insist they are not actually ads since the webmasters do not control the content of the listing, its ranking, or even whether it is shown to any users.

Another advantage of paid inclusion is that it allows site owners to specify particular schedules for crawling pages.

In the general case, one has no control as to when their page will be crawled or added to a search engine index.

Paid inclusion proves to be particularly useful for cases where pages are dynamically generated and frequently modified.

Paid inclusion is a Search engine marketing method in itself, but also a tool of search engine optimization since experts and firms can test out different approaches to improving ranking and see the results often within a couple of days, instead of waiting weeks or months.

Knowledge gained this way can be used to optimize other web pages, without paying the search engine company.

SEM is the wider discipline that incorporates SEO.

SEM includes both paid search results (using tools like Google Adwords or Bing Ads, formerly known as Microsoft adCenter) and organic search results (SEO).

SEM uses paid advertising with AdWords or Bing Ads, pay per click (particularly beneficial for local providers as it enables potential consumers to contact a company directly with one click), article submissions, advertising and making sure SEO has been done.

A keyword analysis is performed for both SEO and SEM, but not necessarily at the same time.

SEM and SEO both need to be monitored and updated frequently to reflect evolving best practices.

In some contexts, the term SEM is used exclusively to mean pay per click advertising,[2] particularly in the commercial advertising and marketing communities which have a vested interest in this narrow definition.

Such usage excludes the wider search marketing community that is engaged in other forms of SEM such as search engine optimization and search retargeting.

Creating the link between SEO and PPC represents an integral part of the SEM concept.

Sometimes, especially when separate teams work on SEO and PPC and the efforts are not synced, positive results of aligning their strategies can be lost.

The aim of both SEO and PPC is maximizing the visibility in search and thus, their actions to achieve it should be centrally coordinated.

Both teams can benefit from setting shared goals and combined metrics, evaluating data together to determine future strategy or discuss which of the tools works better to get the traffic for selected keywords in the national and local search results.

Thanks to this, the search visibility can be increased along with optimizing both conversions and costs.[22] Another part of SEM is social media marketing (SMM).

SMM is a type of marketing that uses social media to persuade consumers that a company’s products and/or services are valuable.[23] Some of the latest theoretical advances include search engine marketing management (SEMM).

SEMM relates to activities including SEO but focuses on return on investment (ROI) management instead of relevant traffic building (as is the case of mainstream SEO).

SEMM also integrates organic SEO, trying to achieve top ranking without using paid means to achieve it, and pay per click SEO.

For example, some of the attention is placed on the web page layout design and how content and information is displayed to the website visitor.

SEO & SEM are two pillars of one marketing job and they both run side by side to produce much better results than focusing on only one pillar.

Paid search advertising has not been without controversy and the issue of how search engines present advertising on their search result pages has been the target of a series of studies and reports[24][25][26] by Consumer Reports WebWatch.

The Federal Trade Commission (FTC) also issued a letter[27] in 2002 about the importance of disclosure of paid advertising on search engines, in response to a complaint from Commercial Alert, a consumer advocacy group with ties to Ralph Nader.

Another ethical controversy associated with search marketing has been the issue of trademark infringement.

The debate as to whether third parties should have the right to bid on their competitors' brand names has been underway for years.

In 2009, Google changed its policy, which formerly prohibited these tactics, allowing third parties to bid on branded terms as long as their landing page in fact provides information on the trademarked term.[28] Though the policy has been changed, this continues to be a source of heated debate.[29] On April 24, 2012, many observers noticed that Google had started to penalize companies that buy links for the purpose of passing rank.

The Google Update was called Penguin.

Since then, there have been several different Penguin/Panda updates rolled out by Google.

SEM has, however, nothing to do with link buying and focuses on organic SEO and PPC management.

As of October 20, 2014, Google had released three official revisions of their Penguin Update.

In 2013, the Tenth Circuit Court of Appeals held in Lens.com, Inc. v. 1-800 Contacts, Inc. that online contact lens seller Lens.com did not commit trademark infringement when it purchased search advertisements using competitor 1-800 Contacts' federally registered 1800 CONTACTS trademark as a keyword.

In August 2016, the Federal Trade Commission filed an administrative complaint against 1-800 Contacts alleging, among other things, that its trademark enforcement practices in the Search engine marketing space have unreasonably restrained competition in violation of the FTC Act.

1-800 Contacts has denied all wrongdoing and appeared before an FTC administrative law judge in April 2017.[30] AdWords is recognized as a web-based advertising tool, since it uses keywords that can deliver adverts explicitly to web users looking for information about a certain product or service.

It is flexible and provides customizable options like Ad Extensions and access to non-search sites, leveraging the display network to help increase brand awareness.

The service hinges on cost-per-click (CPC) pricing, in which a maximum cost per day for the campaign can be chosen; payment therefore applies only when an advert has been clicked.

SEM companies have embarked on AdWords projects as a way to publicize their SEM and SEO services.

One of the most successful approaches to the strategy of this project was to focus on making sure that PPC advertising funds were prudently invested.

Moreover, SEM companies have described AdWords as a practical tool for increasing a client’s return on Internet advertising investment.

The use of conversion tracking and Google Analytics tools was deemed practical for presenting to clients the performance of their campaigns from click to conversion.

AdWords projects have enabled SEM companies to train their clients on the tool and to deliver better campaign performance.

AdWords campaigns could contribute to the growth of web traffic for a number of client websites, by as much as 250% in only nine months.[31] Another way search engine marketing is managed is by contextual advertising.

Here marketers place ads on other sites or portals that carry information relevant to their products, so that the ads appear in front of users who are seeking information from those sites.

A successful SEM plan is the approach to capture the relationships amongst information searchers, businesses, and search engines.

Search engines were not important to some industries in the past, but over recent years the use of search engines for accessing information has become vital to increasing business opportunities.[32] The use of SEM strategic tools for businesses such as tourism can attract potential consumers to view their products, but it can also pose various challenges.[33] These challenges include the competition that companies face within their industry and other sources of information that could draw the attention of online consumers.[32] To meet these challenges, the main objective for businesses applying SEM is to improve and maintain their ranking as high as possible on SERPs so that they can gain visibility.

Therefore, search engines continually adjust and develop their algorithms and the shifting criteria by which web pages are ranked, in order to combat search engine misuse and spamming and to supply the most relevant information to searchers.[32] This can enhance the relationship among information searchers, businesses, and search engines through an understanding of the marketing strategies used to attract business.

Web interoperability

Web interoperability is the practice of producing web pages that are viewable with nearly every device and browser.

The term was first used in the Web interoperability Pledge,[1] which is a promise to adhere to current HTML recommendations as promoted by the World Wide Web Consortium (W3C).

The WIP was not a W3C initiative, but it was started by and has been run by ZDNet AnchorDesk.

This issue was known as "cross browsing" in the browser war between Internet Explorer and Netscape.

Microsoft's Internet Explorer was the dominant browser after that, but modern web browsers such as Mozilla Firefox, Opera and Safari have become dominant, and support additional web standards beyond what Internet Explorer supports.

Because of Internet Explorer's backwards compatibility, some web pages have continued to use non-standard HTML tags, DOM handling scripts, and platform-specific technologies such as ActiveX, which could potentially be harmful for Web accessibility and device independence.

There have been various projects to improve Web interoperability, for example the Web Standards Project, Mozilla's Technology Evangelism[2] and Web Standards Group,[3] and the Web Essential Conference.[4]

Wahl, Luxembourg

Wahl (Luxembourgish: Wal) is a commune and small village in western Luxembourg, in the canton of Redange.

Wahl, which lies in the south of the commune, has a population of 220.[1] Other villages within the commune include Buschrodt and Grevels.

The commune administration for the three villages is located in Wahl's historic school building.

In addition to the commune administration, Wahl has a church and cemetery, but no local stores or restaurants.

The local industry mainly comprises farming, with a few exceptions, like Topsolar (solar- & other alternative energies), Supernova Cult (fashion) and WebSEO (web design), who are the three main non-agriculture related businesses based in Wahl.

Wahl is also well known for a less "real" person.

According to old local myths and legends, the village was also home to the fabled werewolf of Wahl, a man cursed after a dispute with the local minister.[2]

Microsoft Expression Web

Microsoft Expression Web is an HTML editor and general web design software product by Microsoft.

It was discontinued on December 20, 2012 and subsequently made available free of charge from Microsoft.

It was a component of the also discontinued Expression Studio.

Expression Web can design and develop web pages using HTML5, CSS 3, ASP.NET, PHP, JavaScript, XML+XSLT and XHTML.

Expression Web 4 requires .NET Framework 4.0 and Silverlight 4.0 to install and run.[1] Expression Web uses its own standards-based rendering engine which is different from Internet Explorer's Trident engine.[4] On May 14, 2006, Microsoft released the first Community Technology Preview (CTP) version of Expression Web, code-named Quartz.

On September 5, 2006, Microsoft released Beta 1.

Beta 1 removed most of the FrontPage-proprietary (non-standard) features such as bots (use of FPSE features for server-side scripting), parts, functions, themes, automatic generation of navigation buttons, FrontPage forms, navigation pane to build a web site's hierarchy, and other non-standard features available in CTP 1.

The Release To Manufacturing version was made available on December 4, 2006.

The first and the only service pack was published in December 2007.[5] Expression Web does not have the form validation controls for HTML fields like FrontPage, but supports validator controls for ASP.NET.[6] Microsoft Expression Web 2 was released in 2008.[7] Expression Web 2 offers native support for PHP and Silverlight.

No service packs have been released for version 2.

Microsoft Expression Web 3 was released in 2009.[8] Until version 2, Expression Web was the only application in the Expression Studio suite based on Microsoft Office code and dependencies.[9] With version 3, Expression Web was rewritten in Windows Presentation Foundation, in line with the rest of the Expression Suite, without Microsoft Office dependencies.

As a result, features such as customizable toolbars and menus, the standard Windows color scheme, spell check, DLL add-ins, the File menu export feature, drag-and-drop between remote sites, comparing sites by timestamp, automatic language tagging, and basic macro support were removed in this version.[9][10][11][12][13][14][15] Other features, like Undo, do not work reliably.[16][17] Version 3 introduced the Expression Web 3 SuperPreview tool for comparing and rendering webpages in various browsers.

Also noted was the lack of support for root relative links, links that start with a "/" to refer to the root of a web server.

This feature was added with Expression Web 3 Service Pack 1, released in November 2009.[18] Service Pack 2 for Expression Web 3 was released in April 2010.[19] Service Pack 3 for Expression Web 3 was released in October 2011 and includes general product, stability, performance, and security fixes.[20] Microsoft Expression Web 4 was released on June 7, 2010.[21] It added the option of HTML add-ins, and access to a web-based SuperPreview functionality, for testing pages on browsers that cannot be installed on the user's system (such as Mac OS X or Linux browsers).

Microsoft Expression Web 4 also provides an SEO Checker, which analyzes the produced web site against best practices for getting the highest possible search-engine rankings.[22] Version 4 does not bring back all the features removed in Version 3.[23] Expression Web 4 Service Pack 1 was released in March 2011 and added IntelliSense support for the HTML5 and CSS3 draft specifications in the Code editor, HTML5 and CSS3 support in the CSS Properties palette, selected CSS3 properties in the Style dialogs, semantic HTML5 tags in Design View, and new PHP 5.3 functions.[24][25] Expression Web 4 SP2 was released in July 2011; it fixed a number of issues and introduced new features such as jQuery IntelliSense support, a panel for managing snippets, the Interactive Snapshot Panel, comment/uncomment functionality in Code View, and workspace and toolbar customization.[26] In December 2012, Microsoft announced that Expression Studio will no longer be a stand-alone product.[27] Expression Blend is being integrated into Visual Studio, while Expression Web and Expression Design will now be free products.

Technical support is available for customers who purchased Expression Web or Expression Design following their published support lifetime guides, while no support will be offered to free downloaders.

No new versions of Expression Web or Design are planned.[28] Microsoft Expression Web received positive reviews.

PC Pro awarded Expression Web 2 five stars out of six.

"It largely succeeded by concentrating on providing standards-compliant support for the web's core markup languages, (X)HTML and CSS," Tom Arah concluded.[29] PC Magazine also rated Expression Web 2 with 4 stars out of 5 and labeled it as a more cost-effective option compared to the main competitor, Adobe Dreamweaver.

"Even if money is no object, Expression Web 2 might be your better choice," editor Edward Mendelson wrote.[30] However, PC Magazine criticized a lack of "Secure FTP in its Web-publishing functions" and "the ability to create browser-based (as opposed to server-based) scripting of dynamic pages that works in all browsers, including Safari".

On the other hand, PC Magazine noted that "most designers won't care about their absence".[30] However, Microsoft Expression 3 later added support for SSH File Transfer Protocol (SFTP) (otherwise known as Secure FTP) as well as FTP over SSL (FTPS).[31] Expression Web 4, like the previous versions, also received positive reviews[32] with PC Magazine calling it an "efficient website editor with full support for current standards," and praising its "clear interface" and "flexible preview functions."[33]

News Ghana

News Ghana is a Ghanaian independent news portal.

It is one of the most visited news portals in Ghana, with 200,000 average daily visits.[1][2] News Ghana, formerly known as Spy Ghana, started in 2010 as a news blog by Madam Mushura Don-Baare and became a news portal in January 2013.

Spy Ghana also made world headlines for the wrong reasons in 2012.[1][3][4][5][6] News Ghana is managed by Roger A. Agana, who is co-founder, former general manager and editor of Modernghana.com.

The parent company of News Ghana is M'ideas Group with operations throughout Africa specializing in news reporting, public relations, advertisement, product review, search engine optimization (SEO), domain registration,[7] web hosting and designing.[8][9] According to statistics from web traffic analytics company Alexa Internet,[10] News Ghana has a market share of 53.3% from Ghana, 17.2% from India, 14.1% from Nigeria, 2.7% from USA, 1.3% from Germany.[11]

Artist's portfolio

An artist's portfolio is an edited collection of their best artwork intended to showcase an artist's style or method of work.

A portfolio is used by artists to demonstrate their versatility to employers by presenting different samples of current work.

Typically, the work reflects an artist's best work or a depth in one specific area of work.

Historically, portfolios were printed out and placed into a book.

With the increased use of the internet and email, however, there are now websites that host online portfolios that are available to a wider audience.

Sometimes an artist's portfolio can be referred to as a lookbook.

A photography portfolio can focus on a single subject.

It can be a collection of photographs taken with a certain type of camera, in one geographic area, of one person or a group of people, only black & white or sepia photos, a special event, etc.

Many photographers use portfolios to show their best work when looking for jobs in the photography industry.

For example, wedding photographers may put together a book of their best wedding photos to show to engaged couples who are looking for a wedding photographer.

Photojournalists may take a collection of their best freelance work with them when looking for a job.

An artist design book is a collection of photographs meant to show off a model, photographer, style, or clothing line.

Sometimes they are made to compile the looks of other people such as a celebrity, politician or socialite.

This is an especially popular term with fashion bloggers.

Artist design books, or ADBs, in their online form, can be described as "fashion diaries" because bloggers are constantly updating them on a daily or weekly basis.

It is common for stores or clothing designers to use an ADB to show off products.[1] They may include photos of multiple types of clothes, shoes and other accessories from a season or line.

A web designer portfolio depicts web projects made by the designer.

This portfolio is usually made as a website, and it shows the front-end portions of websites made by the web designer as well as entire web projects built from the designer's website wireframes.

A web designer's portfolio commonly includes a standard set of elements.[2]

Single-page application

A Single-page application (SPA) is a web application or website that interacts with the web browser by dynamically rewriting the current web page with new data from the web server, instead of the default method of the browser loading entire new pages.

The goal is faster transitions that make the website feel more like a native app.

In a SPA, all necessary HTML, JavaScript, and CSS code is either retrieved by the browser with a single page load,[1] or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions.

The page does not reload at any point in the process, nor does it transfer control to another page, although the location hash or the HTML5 History API can be used to provide the perception and navigability of separate logical pages in the application.[2] The origins of the term Single-page application are unclear, though the concept was discussed at least as early as 2003.[3] Stuart Morris, a programming student at Cardiff University, Wales wrote the Self-Contained website at slashdotslash.com with the same goals and functions in April 2002,[4] and later the same year Lucas Birdeau, Kevin Hakman, Michael Peachey and Clifford Yeh described a Single-page application implementation in US patent 8,136,109.[5] JavaScript can be used in a web browser to display the user interface (UI), run application logic, and communicate with a web server.
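
A hedged sketch of the History API part of that idea follows; the view names, the #app element, and the render function are placeholders rather than any particular framework's API.

// Navigating between logical pages of an SPA with pushState / popstate, without a full reload.
function renderView(view: string): void {
  const app = document.getElementById('app'); // hypothetical root element of the application
  if (app) app.innerHTML = `<h1>${view}</h1>`; // stand-in for real view rendering
}

function navigate(view: string): void {
  history.pushState({ view }, '', `/${view}`); // update the address bar without reloading the page
  renderView(view);
}

// Handle the browser's back and forward buttons.
window.addEventListener('popstate', (event: PopStateEvent) => {
  const state = event.state as { view?: string } | null;
  renderView(state?.view ?? 'home');
});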

Mature open-source libraries are available that support the building of a SPA, reducing the amount of JavaScript code developers have to write.

There are various techniques available that enable the browser to retain a single page even when the application requires server communication.

Web browser JavaScript frameworks and libraries, such as AngularJS, Ember.js, ExtJS, Knockout.js, Meteor.js, React and Vue.js have adopted SPA principles.

The most prominent technique currently being used is Ajax.[1] Ajax predominantly uses the XMLHttpRequest object from JavaScript (the older ActiveX object approach is deprecated); other Ajax approaches include using IFRAME or script HTML elements.

Popular libraries like jQuery, which normalize Ajax behavior across browsers from different manufacturers, have further popularized the Ajax technique.
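
A minimal sketch of the Ajax pattern using the modern fetch API is given below (the /api/articles endpoint and the element ID are hypothetical): the browser requests data in the background and updates only part of the current page.

// Fetch JSON from the server and update one region of the page, leaving the rest untouched.
async function loadLatestArticles(): Promise<void> {
  const response = await fetch('/api/articles'); // hypothetical endpoint returning JSON
  const articles: { title: string }[] = await response.json();
  const list = document.getElementById('article-list');
  if (list) {
    list.innerHTML = articles.map((article) => `<li>${article.title}</li>`).join('');
  }
}

loadLatestArticles(); // no page reload and no navigation away from the current page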

WebSockets are a bidirectional real-time client-server communication technology that are part of the HTML5 specification.

Their use is superior to Ajax in terms of performance[10] and simplicity.

Server-sent events (SSEs) is a technique whereby servers can initiate data transmission to browser clients.

Once an initial connection has been established, an event stream remains open until closed by the client.

SSEs are sent over traditional HTTP and have a variety of features that WebSockets lack by design such as automatic reconnection, event IDs, and the ability to send arbitrary events.[11] Although this method is outdated, asynchronous calls to the server may also be achieved using browser plug-in technologies such as Silverlight, Flash, or Java applets.
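
As a brief client-side illustration of the server-sent events pattern described above, the following sketch uses the standard EventSource API (the /api/updates URL and event name are hypothetical); the stream stays open over plain HTTP and the browser reconnects automatically if the connection drops.

// Client side of a server-sent events stream; the server pushes updates over ordinary HTTP.
const stream = new EventSource('/api/updates'); // hypothetical SSE endpoint

stream.onmessage = (event: MessageEvent) => {
  console.log('update received:', event.data); // default, unnamed events
};

stream.addEventListener('price', (event) => {
  const message = event as MessageEvent;
  console.log('named "price" event:', message.data); // SSE supports arbitrary named events
});

// Automatic reconnection and event IDs are handled by the browser; close explicitly when finished.
// stream.close();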

Requests to the server typically result in either raw data (e.g., XML or JSON), or new HTML being returned.

In the case where HTML is returned by the server, JavaScript on the client updates a partial area of the DOM (Document Object Model).[12] When raw data is returned, often a client-side JavaScript XML / (XSL) process (and in the case of JSON a template) is used to translate the raw data into HTML, which is then used to update a partial area of the DOM.

A SPA moves logic from the server to the client, with the role of the web server evolving into a pure data API or web service.

This architectural shift has, in some circles, been coined "Thin Server Architecture" to highlight that complexity has been moved from the server to the client, with the argument that this ultimately reduces overall complexity of the system.

In the stateful server approach, the server keeps the necessary state of the client's page in memory.

In this way, when any request hits the server (usually user actions), the server sends the appropriate HTML and/or JavaScript with the concrete changes to bring the client to the new desired state (usually adding/deleting/updating a part of the client DOM).

At the same time, the state in server is updated.

Most of the logic is executed on the server, and HTML is usually also rendered on the server.

In some ways, the server simulates a web browser, receiving events and performing delta changes in server state which are automatically propagated to client.

This approach needs more server memory and server processing, but the advantage is a simplified development model because a) the application is usually fully coded in the server, and b) data and UI state in the server are shared in the same memory space with no need for custom client/server communication bridges.

This is a variant of the stateful server approach.

The client page sends data representing its current state to the server, usually through Ajax requests.

Using this data, the server is able to reconstruct the client state of the part of the page which needs to be modified and can generate the necessary data or code (for instance, as JSON or JavaScript), which is returned to the client to bring it to a new state, usually modifying the page DOM tree according to the client action that motivated the request.

This approach requires that more data be sent to the server and may require more computational resources per request to partially or fully reconstruct the client page state in the server.

At the same time, this approach is more easily scalable because there is no per-client page data kept in the server and, therefore, Ajax requests can be dispatched to different server nodes with no need for session data sharing or server affinity.

Some SPAs may be executed from a local file using the file URI scheme.

This gives users the ability to download the SPA from a server and run the file from a local storage device, without depending on server connectivity.

If such a SPA wants to store and update data, it must use browser-based Web Storage.

These applications benefit from advances available with HTML5.[13] Because the SPA is an evolution away from the stateless page-redraw model that browsers were originally designed for, some new challenges have emerged.

Each of these problems has an effective solution.[14] Because of the lack of JavaScript execution on the crawlers of some popular Web search engines,[19] SEO (search engine optimization) has historically presented a problem for public-facing websites wishing to adopt the SPA model.[20] Between 2009 and 2015, Google Webmaster Central proposed and then recommended an "AJAX crawling scheme"[21][22] using an initial exclamation mark in fragment identifiers for stateful AJAX pages (#!).

Special behavior must be implemented by the SPA site to allow extraction of relevant metadata by the search engine's crawler.

For search engines that do not support this URL hash scheme, the hashed URLs of the SPA remain invisible.
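
As a rough illustration of how that (now deprecated) scheme worked, a stateful URL such as https://example.com/#!products/42 was requested by the crawler in an "escaped fragment" form, for which the site was expected to serve a pre-rendered HTML snapshot; the helper below only sketches the URL mapping.

```javascript
// Sketch of the deprecated AJAX crawling scheme's URL rewriting:
// "#!state" becomes an "_escaped_fragment_" query parameter.
function toEscapedFragmentUrl(url) {
  const [base, fragment] = url.split('#!');
  if (fragment === undefined) return url;               // not a hash-bang URL
  const separator = base.includes('?') ? '&' : '?';
  return base + separator + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

// toEscapedFragmentUrl('https://example.com/#!products/42')
//   -> 'https://example.com/?_escaped_fragment_=products%2F42'
```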

These "hash-bang" URIs have been considered problematic by a number of writers including Jeni Tennison at the W3C because they make pages inaccessible to those who do not have JavaScript activated in their browser.

They also break HTTP referer headers as browsers are not allowed to send the fragment identifier in the Referer header.[23] In 2015, Google deprecated their hash-bang AJAX crawling proposal.[24]

Alternatively, applications may render the first page load on the server and subsequent page updates on the client.

This is traditionally difficult, because the rendering code might need to be written in a different language or framework on the server and in the client.

Using logic-less templates, cross-compiling from one language to another, or using the same language on the server and the client may help to increase the amount of code that can be shared.

Because SEO compatibility is not trivial in SPAs, it is worth noting that SPAs are commonly not used in contexts where search engine indexing is either a requirement or desirable.

Use cases include applications that surface private data hidden behind an authentication system.

In the cases where these applications are consumer products, a classic "page redraw" model is often used for the application's landing page and marketing site, which provides enough metadata for the application to appear as a hit in a search engine query.

Blogs, support forums, and other traditional page-redraw artifacts often sit around the SPA and can seed search engines with relevant terms.

Another approach used by server-centric web frameworks like the Java-based ItsNat is to render any hypertext on the server using the same language and templating technology.

In this approach, the server knows the client's DOM state with precision; any page update required, big or small, is generated on the server and transported by Ajax as the exact JavaScript code (executing DOM methods) needed to bring the client page to the new state.

Developers can decide which page states must be crawlable by web spiders for SEO and generate the required state at load time as plain HTML instead of JavaScript.

In the case of the ItsNat framework, this is automatic because ItsNat keeps the client DOM tree in the server as a Java W3C DOM tree; rendering of this DOM tree in the server generates plain HTML at load time and JavaScript DOM actions for Ajax requests.

This duality is very important for SEO because developers can build the desired DOM state on the server with the same Java code and pure HTML-based templating; at page load time, ItsNat generates conventional HTML, making this DOM state SEO-compatible.

As of version 1.3,[25] ItsNat provides a new stateless mode in which the client DOM is not kept on the server; instead, the client DOM state is partially or fully reconstructed on the server when processing each Ajax request, based on the required data sent by the client describing its current DOM state. The stateless mode can also be SEO-compatible, because SEO compatibility is established at the load time of the initial page and is unaffected by whether the stateful or stateless mode is used.

There are a couple of workarounds to make it look as though the web site is crawlable.

Both involve creating separate HTML pages that mirror the content of the SPA.

The server could create an HTML-based version of the site and deliver that to crawlers, or it's possible to use a headless browser such as PhantomJS to run the JavaScript application and output the resulting HTML.

Both of these require quite a bit of effort and can become a maintenance headache for large, complex sites.

There are also potential SEO pitfalls.

If server-generated HTML is deemed to be too different from the SPA content, then the site will be penalized.

Running PhantomJS to output the HTML can slow down the response speed of the pages, which is something for which search engines – Google in particular – downgrade rankings.[26]

One way to increase the amount of code that can be shared between servers and clients is to use a logic-less template language like Mustache or Handlebars.

Such templates can be rendered from different host languages, such as Ruby on the server and JavaScript in the client.

However, merely sharing templates typically requires duplication of business logic used to choose the correct templates and populate them with data.
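
For instance, with the mustache.js library on the client (and a Mustache implementation in the server-side language), the same logic-less template string can be filled in on either side; the template, data and element ID below are illustrative.

```javascript
// Rendering a shared, logic-less Mustache template on the client.
import Mustache from 'mustache';   // mustache.js

const template = '<h1>{{title}}</h1><p>{{summary}}</p>';   // no logic, only placeholders
const html = Mustache.render(template, {
  title: 'Single-page applications',
  summary: 'The same template text could also be rendered on the server.'
});

document.getElementById('article').innerHTML = html;
```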

Rendering from templates may have negative performance effects when only updating a small portion of the page—such as the value of a text input within a large template.

Replacing an entire template might also disturb a user's selection or cursor position, where updating only the changed value might not.

To avoid these problems, applications can use UI data bindings or granular DOM manipulation to only update the appropriate parts of the page instead of re-rendering entire templates.
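
A small illustration of the difference (element IDs are hypothetical): re-rendering a whole fragment versus touching only the node whose value changed.

```javascript
// Coarse approach: re-render the whole fragment, which can reset focus,
// text selection and cursor position inside it.
function rerenderProfile(user) {
  document.getElementById('profile').innerHTML =
    `<h2>${user.name}</h2><input id="nickname" value="${user.nickname}">`;
}

// Granular approach: update only the value that actually changed.
function updateNickname(user) {
  document.getElementById('nickname').value = user.nickname;
}
```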

With a SPA being, by definition, "a single page", the model breaks the browser's design for page history navigation using the Forward/Back buttons.

This presents a usability impediment when a user presses the back button, expecting the previous screen state within the SPA, but instead, the application's single page unloads and the previous page in the browser's history is presented.

The traditional solution for SPAs has been to change the browser URL's hash fragment identifier in accord with the current screen state.

This can be achieved with JavaScript, and causes URL history events to be built up within the browser.

As long as the SPA is capable of resurrecting the same screen state from information contained within the URL hash, the expected back-button behavior is retained.

To further address this issue, the HTML5 specification has introduced pushState and replaceState providing programmatic access to the actual URL and browser history.
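
A minimal sketch of both techniques (the route names and render function are hypothetical): the History API records each screen state, and a popstate handler restores it when the user navigates back or forward.

```javascript
// Hypothetical render function: swaps the visible content for the given route.
function renderScreen(route) {
  document.getElementById('app').textContent = 'Current screen: ' + route;
}

// Record a new screen state in the browser history without reloading the page.
function showScreen(route) {
  renderScreen(route);
  history.pushState({ route }, '', '/' + route);   // updates the address bar and history stack
}

// Restore the corresponding screen when the user presses Back or Forward.
window.addEventListener('popstate', (event) => {
  renderScreen(event.state ? event.state.route : 'home');
});
```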

Analytics tools such as Google Analytics rely heavily upon entire new pages loading in the browser, initiated by a new page load.

SPAs do not work this way.

After the first page load, all subsequent page and content changes are handled internally by the application, which should simply call a function to update the analytics package.

Failing to call said function, the browser never triggers a new page load, nothing gets added to the browser history, and the analytics package has no idea who is doing what on the site.

It is possible to add page load events to a SPA using the HTML5 history API; this will help integrate analytics.
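
For example, assuming Google Analytics' gtag.js is already loaded on the page (the measurement ID below is a placeholder, and the exact calls depend on the analytics library and version), a SPA's routing code can report each virtual page view itself:

```javascript
// Report a "virtual" page view after an in-app navigation.
// 'G-XXXXXXX' is a placeholder measurement ID.
function trackPageView(path) {
  if (typeof gtag === 'function') {
    gtag('config', 'G-XXXXXXX', { page_path: path });
  }
}

// Call this from the SPA's router whenever the visible screen changes.
trackPageView(window.location.pathname);
```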

The difficulty comes in managing this and ensuring that everything is being tracked accurately – this involves checking for missing reports and double entries.

Some frameworks provide open source analytics integrations addressing most of the major analytics providers.

Developers can integrate them into the application and make sure that everything is working correctly, but there is no need to do everything from scratch.[26]

Single-page applications have a slower first page load than server-based applications.

This is because the first load has to bring down the framework and the application code before rendering the required view as HTML in the browser.

A server-based application just has to push out the required HTML to the browser, reducing the latency and download time.

There are some ways of speeding up the initial load of a SPA, such as a heavy approach to caching and lazy-loading modules when needed.
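
One common technique is to defer loading a feature module until the user first needs it, for example with a dynamic import() (the module path and its exported render function are illustrative):

```javascript
// Lazy-load a feature module only when the user first opens it.
// './reports.js' is an illustrative module that is assumed to export render().
let reportsModule = null;

async function openReports() {
  if (!reportsModule) {
    reportsModule = await import('./reports.js');   // downloaded on demand, not at startup
  }
  reportsModule.render(document.getElementById('app'));
}
```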

But it's not possible to get away from the fact that it needs to download the framework, at least some of the application code, and will most likely hit an API for data before displaying something in the browser.[26] This is a "pay me now, or pay me later" trade-off scenario.

The question of performance and wait-times remains a decision that the developer must make.

A SPA is fully loaded in the initial page load and then page regions are replaced or updated with new page fragments loaded from the server on demand.

To avoid excessive downloading of unused features, a SPA will often progressively download more features as they become required, either small fragments of the page, or complete screen modules.

In this way an analogy exists between "states" in a SPA and "pages" in a traditional website.

Because "state navigation" in the same page is analogous to page navigation, in theory, any page-based web site could be converted to single-page replacing in the same page only the changed parts.

The SPA approach on the web is similar to the single-document interface (SDI) presentation technique popular in native desktop applications.

Google Search

Google Search, also referred to as Google Web Search or simply Google, is a web search engine developed by Google.

It is the most used search engine on the World Wide Web across all platforms, with 92.62% market share as of June 2019,[4] handling more than 5.4 billion searches each day.[5] The order of search results returned by Google is based, in part, on a priority rank system called "PageRank".

Google Search also provides many different options for customized search, using symbols to include, exclude, specify or require certain search behavior, and offers specialized interactive experiences, such as flight status and package tracking, weather forecasts, currency, unit and time conversions, word definitions, and more.

The main purpose of Google Search is to search for text in publicly accessible documents offered by web servers, as opposed to other data, such as images or data contained in databases.

It was originally developed in 1997 by Larry Page, Sergey Brin, and Scott Hassan.[6][7][8] In June 2011, Google introduced "Google Voice Search" to search for spoken, rather than typed, words.[9] In May 2012, Google introduced a Knowledge Graph semantic search feature in the U.S.

Analysis of the frequency of search terms may indicate economic, social and health trends.[10] Data about the frequency of use of search terms on Google can be openly queried via Google Trends and has been shown to correlate with flu outbreaks and unemployment levels, and to provide the information faster than traditional reporting methods and surveys.

As of mid-2016, Google's search engine has begun to rely on deep neural networks.[11] Competitors of Google include Baidu and Soso.com in China; Naver.com and Daum.net in South Korea; Yandex in Russia; Seznam.cz in the Czech Republic; Qwant in France;[12] Yahoo in Japan, Taiwan and the US, as well as Bing and DuckDuckGo.[13] Some smaller search engines offer facilities not available with Google, e.g. not storing any private or tracking information.

Within the U.S., as of July 2018, Bing handled 24.2 percent of all search queries.

During the same period of time, Oath (formerly known as Yahoo) had a search market share of 11.5 percent.

Market leader Google generated 63.2 percent of all core search queries in the U.S.[14]

Google indexes hundreds of terabytes of information from web pages.[15] For websites that are currently down or otherwise not available, Google provides links to cached versions of the site, formed by the search engine's latest indexing of that page.[16] Additionally, Google indexes some file types, being able to show users PDFs, Word documents, Excel spreadsheets, PowerPoint presentations, certain Flash multimedia content, and plain text files.[17] Users can also activate "SafeSearch", a filtering technology aimed at preventing explicit and pornographic content from appearing in search results.[18]

Despite Google Search's immense index, sources generally assume that Google is only indexing less than 5% of the total Internet, with the rest belonging to the deep web, inaccessible through its search tools.[15][19][20]

In 2012, Google changed its search indexing tools to demote sites that had been accused of piracy.[21]

In October 2016, Gary Illyes, a webmaster trends analyst with Google, announced that the search engine would be making a separate, primary web index dedicated for mobile devices, with a secondary, less up-to-date index for desktop use.

The change was a response to the continued growth in mobile usage, and a push for web developers to adopt a mobile-friendly version of their websites.[22][23] In December 2017, Google began rolling out the change, having already done so for multiple websites.[24]

In August 2009, Google invited web developers to test a new search architecture, codenamed "Caffeine", and give their feedback.

The new architecture provided no visual differences in the user interface, but added significant speed improvements and a new "under-the-hood" indexing infrastructure.

The move was interpreted in some quarters as a response to Microsoft's recent release of an upgraded version of its own search service, renamed Bing, as well as the launch of Wolfram Alpha, a new search engine based on "computational knowledge".[25][26] Google announced completion of "Caffeine" on June 8, 2010, claiming 50% fresher results due to continuous updating of its index.[27] With "Caffeine", Google moved its back-end indexing system away from MapReduce and onto Bigtable, the company's distributed database platform.[28][29]

In August 2018, Danny Sullivan from Google announced a broad core algorithm update.

According to analysis by the industry publications Search Engine Watch and Search Engine Land, the update demoted medical and health-related websites that were not user-friendly and did not provide a good user experience.

This is why industry experts named it "Medic".[30] Google holds YMYL (Your Money or Your Life) pages to very high standards.

This is because misinformation can affect users financially, physically or emotionally.

Therefore, the update targeted particularly those YMYL pages that have low-quality content and misinformation.

This resulted in the algorithm targeting health and medical related websites more than others.

However, many other websites from other industries were also negatively affected.[31]

Google Search consists of a series of localized websites.

The largest of those, the google.com site, is the top most-visited website in the world.[32] Some of its features include a definition link for most searches including dictionary words, the number of results you got on your search, links to other searches (e.g. for words that Google believes to be misspelled, it provides a link to the search results using its proposed spelling), and many more.

Google Search accepts queries as normal text, as well as individual keywords.[33] It automatically corrects misspelled words, and yields the same results regardless of capitalization.[33] For more customized results, one can use a wide variety of operators.[34][35]

Google applies query expansion to submitted search queries, using techniques to deliver results that it considers "smarter" than the query users actually submitted.

This technique involves several steps.[36]

In 2008, Google started to give users autocompleted search suggestions in a list below the search bar while typing.[37]

Google's homepage includes a button labeled "I'm Feeling Lucky".

This feature originally allowed users to type in their search query, click the button and be taken directly to the first result, bypassing the search results page.

With the 2010 announcement of Google Instant, an automatic feature that immediately displays relevant results as users are typing in their query, the "I'm Feeling Lucky" button disappeared, requiring users to opt out of Instant results through search settings in order to keep using the "I'm Feeling Lucky" functionality.[38] In 2012, "I'm Feeling Lucky" was changed to serve as an advertisement for Google services; when users hover their mouse over the button, it spins and shows an emotion ("I'm Feeling Puzzled" or "I'm Feeling Trendy", for instance), and, when clicked, takes users to a Google service related to that emotion.[39] Tom Chavez of "Rapt", a firm helping to determine a website's advertising worth, estimated in 2007 that Google lost $110 million in revenue per year due to use of the button, which bypasses the advertisements found on the search results page.[40]

Besides its main text-based search-engine features, Google Search also offers multiple quick, interactive experiences.

These include, among others, the flight status, package tracking, weather, conversion and definition features mentioned above.[41][42][43]

During Google's developer conference, Google I/O, in May 2013, the company announced that, on Google Chrome and Chrome OS, users would be able to say "OK Google", with the browser initiating an audio-based search, with no button presses required.

After having the answer presented, users can follow up with additional, contextual questions; an example includes initially asking "OK Google, will it be sunny in Santa Cruz this weekend?", hearing a spoken answer, and replying with "how far is it from here?"[44][45] An update to the Chrome browser with voice-search functionality rolled out a week later, though it required a button press on a microphone icon rather than "OK Google" voice activation.[46] Google released a browser extension for the Chrome browser, named with a "beta" tag for unfinished development, shortly thereafter.[47] In May 2014, the company officially added "OK Google" into the browser itself;[48] they removed it in October 2015, citing low usage, though the microphone icon for activation remained available.[49] In May 2016, 20% of search queries on mobile devices were done through voice.[50]

"Universal search" was launched by Google on May 16, 2007 as an idea that merged the results from different kinds of search types into one.

Prior to Universal search, a standard Google Search would consist of links only to websites.

Universal search, however, incorporates a wide variety of sources, including websites, news, pictures, maps, blogs, videos, and more, all shown on the same search results page.[51][52] Marissa Mayer, then-vice president of search products and user experience, described the goal of Universal search as "we're attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results."[53]

In June 2017, Google expanded its search results to cover available job listings.

The data is aggregated from various major job boards and collected by analyzing company homepages.

Initially only available in English, the feature aims to simplify finding jobs suitable for each user.[54][55]

In May 2009, Google announced that they would be parsing website microformats in order to populate search result pages with "Rich snippets".

Such snippets include additional details about results, such as displaying reviews for restaurants and social media accounts for individuals.[56] In May 2016, Google expanded on the "Rich snippets" format to offer "Rich cards", which, similarly to snippets, display more information about results, but show them at the top of the mobile website in a swipeable carousel-like format.[57] Originally limited to movie and recipe websites in the United States only, the feature expanded to all countries globally in 2017.[58] Web publishers can now exercise greater control over their rich snippets.

Preview settings from these meta tags will become effective in mid-to-late October 2019 and may take about a week for the global rollout to complete.[59]

The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with information gathered from a variety of sources.[60] This information is presented to users in a box to the right of search results.[61] Knowledge Graph boxes were added to Google's search engine in May 2012,[60] starting in the United States, with international expansion by the end of the year.[62] The information covered by the Knowledge Graph grew significantly after launch, tripling its original size within seven months,[63] and being able to answer "roughly one-third" of the 100 billion monthly searches Google processed in May 2016.[64] The information is often used as a spoken answer in Google Assistant[65] and Google Home searches.[66] The Knowledge Graph has been criticized for providing answers without source attribution.[64]

In May 2017, Google enabled a new "Personal" tab in Google Search, letting users search for content in their Google accounts' various services, including email messages from Gmail and photos from Google Photos.[67][68]

The Google feed is a personalized stream of articles, videos, and other news-related content.

The feed contains a "mix of cards" which show topics of interest based on users' interactions with Google, or topics they choose to follow directly.[69] Cards include, "links to news stories, YouTube videos, sports scores, recipes, and other content based on what [Google] determined you're most likely to be interested in at that particular moment."[69] Users can also tell Google they're not interested in certain topics to avoid seeing future updates.

The Google feed launched in December 2016[70] and received a major update in July 2017.[71] As of May 2018, the Google feed can be found on the Google app and by swiping left on the home screen of certain Android devices.

As of 2019, Google no longer allows political campaigns worldwide to target their advertisements at individual voters.[72]

Google's rise was largely due to a patented algorithm called PageRank which helps rank web pages that match a given search string.[73] When Google was a Stanford research project, it was nicknamed BackRub because the technology checks backlinks to determine a site's importance.

Other keyword-based methods to rank search results, used by many search engines that were once more popular than Google, would check how often the search terms occurred in a page, or how strongly associated the search terms were within each resulting page.

The PageRank algorithm instead analyzes human-generated links assuming that web pages linked from many important pages are also important.

The algorithm computes a recursive score for pages, based on the weighted sum of other pages linking to them.
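
In its commonly published simplified form (the production ranking uses many additional signals), the PageRank of a page p, with damping factor d, N pages in total, B_p the set of pages linking to p, and L(q) the number of outbound links on page q, can be written as:

$$PR(p) = \frac{1-d}{N} + d \sum_{q \in B_p} \frac{PR(q)}{L(q)}$$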

PageRank is thought to correlate well with human concepts of importance.

In addition to PageRank, Google, over the years, has added many other secret criteria for determining the ranking of resulting pages.

This is reported to comprise over 250 different indicators,[74][75] the specifics of which are kept secret to avoid difficulties created by scammers and help Google maintain an edge over its competitors globally.

PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used for RankDex, developed by Robin Li in 1996.

Larry Page's patent for PageRank filed in 1998 includes a citation to Li's earlier patent.

Li later went on to create the Chinese search engine Baidu in 2000.[76][77][78]

In a potential hint of the future direction of its Search algorithm, Google's then chief executive Eric Schmidt said in a 2007 interview with the Financial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'".[79] Schmidt reaffirmed this during a 2010 interview with the Wall Street Journal: "I actually think most people don't want Google to answer their questions, they want Google to tell them what they should be doing next."[80]

In 2013 the European Commission found that Google Search favored Google's own products, instead of the best result for consumers' needs.[81]

In February 2015 Google announced a major change to its mobile search algorithm which would favor mobile-friendly websites over others.

Nearly 60% of Google Searches come from mobile phones.

Google says it wants users to have access to premium quality websites.

Websites that lack a mobile-friendly interface would be ranked lower, and it was expected that this update would cause a shake-up of rankings.

Businesses that fail to update their websites accordingly could see a dip in their regular website traffic.[82]

Because Google is the most popular search engine, many webmasters attempt to influence their website's Google rankings.

An industry of consultants has arisen to help websites increase their rankings on Google and on other search engines.

This field, called search engine optimization, attempts to discern patterns in search engine listings, and then develop a methodology for improving rankings to draw more searchers to their clients' sites.

Search engine optimization encompasses both "on page" factors (like body copy, title elements, H1 heading elements and image alt attribute values) and Off Page Optimization factors (like anchor text and PageRank).

The general idea is to affect Google's relevance algorithm by incorporating the keywords being targeted in various places "on page", in particular the title element and the body copy (note: the higher up in the page, presumably the better its keyword prominence and thus the ranking).

Too many occurrences of the keyword, however, cause the page to look suspect to Google's spam checking algorithms.

Google has published guidelines for website owners who would like to raise their rankings when using legitimate optimization consultants.[83]

It has been hypothesized (and was allegedly the opinion of the owner of one business about which there have been numerous complaints) that negative publicity, such as numerous consumer complaints, may elevate a page's rank on Google Search as effectively as favorable comments.[84] The particular problem addressed in The New York Times article, which involved DecorMyEyes, was resolved shortly thereafter by an undisclosed fix in the Google algorithm.

According to Google, it was not the frequently published consumer complaints about DecorMyEyes which resulted in the high ranking but mentions on news websites of events which affected the firm such as legal actions against it.

Google Search Console helps to check for websites that use duplicate or copyrighted content.[85]

In 2013, Google significantly upgraded its search algorithm with "Hummingbird".

Its name was derived from the speed and accuracy of the hummingbird.[86] The change was announced on September 26, 2013, having already been in use for a month.[87] "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords.[86] It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage.[88] The upgrade marked the most significant change to Google Search in years, with more "human" search interactions[89] and a much heavier focus on conversation and meaning.[86] Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.[90]

On certain occasions, the logo on Google's webpage will change to a special version, known as a "Google Doodle".

This is a picture, drawing, animation or interactive game that includes the logo.

It is usually done for a special event or day although not all of them are well known.[91] Clicking on the Doodle links to a string of Google Search results about the topic.

The first was a reference to the Burning Man Festival in 1998,[92][93] and others have been produced for the birthdays of notable people like Albert Einstein, historical events like the interlocking Lego block's 50th anniversary and holidays like Valentine's Day.[94] Some Google Doodles have interactivity beyond a simple search, such as the famous "Google Pacman" version that appeared on May 21, 2010.

Google offers a "Google Search" mobile app for Android and iOS devices.[95] The mobile apps exclusively feature a "feed", a news feed-style page of continually-updated developments on news and topics of interest to individual users.

Android devices were introduced to a preview of the feed in December 2016,[96] while it was made official on both Android and iOS in July 2017.[97][98]

In April 2016, Google updated its Search app on Android to feature "Trends"; search queries gaining popularity appeared in the autocomplete box along with normal query autocompletion.[99] The update received significant backlash, due to encouraging search queries unrelated to users' interests or intentions, prompting the company to issue an update with an opt-out option.[100] In September 2017, the Google Search app on iOS was updated to feature the same functionality.[101]

Until May 2013, Google Search had offered a feature to translate search queries into other languages.

A Google spokesperson told Search Engine Land that "Removing features is always tough, but we do think very hard about each decision and its implications for our users.

Unfortunately, this feature never saw much pick up".[102]

Instant search was announced in September 2010 as a feature that displayed suggested results while the user typed in their search query.

The primary advantage of the new system was its ability to save time, with Marissa Mayer, then-vice president of search products and user experience, proclaiming that the feature would save 2–5 seconds per search, elaborating that "That may not seem like a lot at first, but it adds up.

With Google Instant, we estimate that we'll save our users 11 hours with each passing second!"[103] Matt Van Wagner of Search Engine Land wrote that "Personally, I kind of like Google Instant and I think it represents a natural evolution in the way search works", and also praised Google's efforts in public relations, writing that "With just a press conference and a few well-placed interviews, Google has parlayed this relatively minor speed improvement into an attention-grabbing front-page news story".[104] The upgrade also became notable for the company switching Google Search's underlying technology from HTML to AJAX.[105] Instant Search could be disabled via Google's "preferences" menu for those who didn't want its functionality.[106]

The publication 2600: The Hacker Quarterly compiled a list of words that Google Instant did not show suggested results for, with a Google spokesperson giving the following statement to Mashable:[107]

There are a number of reasons you may not be seeing search queries for a particular topic.

Among other things, we apply a narrow set of removal policies for pornography, violence, and hate speech.

It's important to note that removing queries from Autocomplete is a hard problem, and not as simple as blacklisting particular terms and phrases.

In search, we get more than one billion searches each day.

Because of this, we take an algorithmic approach to removals, and just like our search algorithms, these are imperfect.

We will continue to work to improve our approach to removals in Autocomplete, and are listening carefully to feedback from our users.

Our algorithms look not only at specific words, but compound queries based on those words, and across all languages.

So, for example, if there's a bad word in Russian, we may remove a compound word including the transliteration of the Russian word into English.

We also look at the search results themselves for given queries.

So, for example, if the results for a particular query seem pornographic, our algorithms may remove that query from Autocomplete, even if the query itself wouldn't otherwise violate our policies.

This system is neither perfect nor instantaneous, and we will continue to work to make it better.

PC Magazine discussed the inconsistency in how some forms of the same topic are allowed; for instance, "lesbian" was blocked, while "gay" was not, and "cocaine" was blocked, while "crack" and "heroin" were not.

The report further stated that seemingly normal words were also blocked due to pornographic innuendos, most notably "scat", likely due to having two completely separate contextual meanings, one for music and one for a sexual practice.[108]

On July 26, 2017, Google removed Instant results, due to a growing number of searches on mobile devices, where interaction with search, as well as screen sizes, differ significantly from a computer.[109][110]

Various search engines provide encrypted Web search facilities.

In May 2010 Google rolled out SSL-encrypted web search.[111] The encrypted search was accessed at encrypted.google.com.[112] However, the web search is encrypted via Transport Layer Security (TLS) by default today, thus every search request should be automatically encrypted if TLS is supported by the web browser.[113] On its support website, Google announced that the address encrypted.google.com would be turned off April 30, 2018, citing the fact that all Google products and most new browsers use HTTPS connections as the reason for the discontinuation.[114]

Google Real-Time Search was a feature of Google Search in which search results also sometimes included real-time information from sources such as Twitter, Facebook, blogs, and news websites.[115] The feature was introduced on December 7, 2009[116] and went offline on July 2, 2011 after the deal with Twitter expired.[117] Real-Time Search included Facebook status updates beginning on February 24, 2010.[118] A feature similar to Real-Time Search was already available on Microsoft's Bing search engine, which showed results from Twitter and Facebook.[119] The interface for the engine showed a live, descending "river" of posts in the main region (which could be paused or resumed), while a bar chart metric of the frequency of posts containing a certain search term or hashtag was located in the right-hand corner of the page above a list of most frequently reposted posts and outgoing links.

Hashtag search links were also supported, as were "promoted" tweets hosted by Twitter (located persistently on top of the river) and thumbnails of retweeted image or video links.

In January 2011, geolocation links of posts were made available alongside results in Real-Time Search.

In addition, posts containing syndicated or attached shortened links were made searchable by the link: query option.

In July 2011 Real-Time Search became inaccessible, with the Real-Time link in the Google sidebar disappearing and a custom 404 error page generated by Google returned at its former URL.

Google originally suggested that the interruption was temporary and related to the launch of Google+;[120] they subsequently announced that it was due to the expiry of a commercial arrangement with Twitter to provide access to tweets.[121]

Searches made by search engines, including Google, leave traces.

This raises concerns about privacy.

In principle, if details of a user's searches are found, those with access to the information—principally state agencies responsible for law enforcement and similar matters—can make deductions about the user's activities.

This has been used for the detection and prosecution of lawbreakers; for example a murderer was found and convicted after searching for terms such as "tips with killing with a baseball bat".[122] A search may leave traces both on a computer used to make the search, and in records kept by the search provider.

When using a search engine through a browser program on a computer, search terms and other information may be stored on the computer by default, unless the browser is set not to do this, or they are erased.

Saved terms may be discovered on forensic analysis of the computer.

An Internet Service Provider (ISP) or search engine provider (e.g., Google) may store records which relate search terms to an IP address and a time.[123] Whether such logs are kept, and access to them by law enforcement agencies, is subject to legislation in different jurisdictions and working practices; the law may mandate, prohibit, or say nothing about logging of various types of information.

Some search engines, located in jurisdictions where it is not illegal, make a feature of not storing user search information.[124]

The keywords suggested by the Autocomplete feature reflect the search activity of a population of users, which is made possible by an identity management system.

Volumes of personal data are collected via Eddystone web and proximity beacons.[citation needed] Google has been criticized for placing long-term cookies on users' machines to store these preferences, a tactic which also enables it to track a user's search terms and retain the data for more than a year.[125] Since 2012, Google Inc. has globally introduced encrypted connections for most of its clients, in order to bypass government blocking of its commercial and IT services.[126]

In late June 2011, Google introduced a new look to the Google home page in order to boost the use of the Google+ social tools.[127] One of the major changes was replacing the classic navigation bar with a black one.

Google's digital creative director Chris Wiggins explains: "We're working on a project to bring you a new and improved Google experience, and over the next few months, you'll continue to see more updates to our look and feel."[128] The new navigation bar has been negatively received by a vocal minority.[129]

In November 2013, Google started testing yellow labels for advertisements displayed in search results, to improve user experience.

The new labels, highlighted in yellow color, and aligned to the left of each sponsored link help users clearly differentiate between organic and sponsored results.[130] On December 15, 2016, Google rolled out a new desktop search interface that mimics their modular mobile user interface.

The mobile design consists of a tabular layout that highlights search features in boxes and imitates the desktop Knowledge Graph real estate, which appears in the right-hand rail of the search engine results page. These featured elements frequently include Twitter carousels, People Also Search For, and Top Stories (vertical and horizontal design) modules.

The Local Pack and Answer Box were two of the original features of the Google SERP that were primarily showcased in this manner, but this new layout creates a previously unseen level of design consistency for Google results.[131]

In addition to its tool for searching web pages, Google also provides services for searching images, Usenet newsgroups, news websites, videos (Google Videos), searching by locality, maps, and items for sale online.

Google Videos allows searching the World Wide Web for video clips.[132] The service evolved from Google Video, Google's discontinued video hosting service that also allowed users to search the web for video clips.[132] As of 2012, Google had indexed over 30 trillion web pages and was receiving 100 billion queries per month.[133] It also caches much of the content that it indexes.

Google operates other tools and services including Google News, Google Shopping, Google Maps, Google Custom Search, Google Earth, Google Docs, Picasa, Panoramio, YouTube, Google Translate, Google Blog Search and Google Desktop Search.

There are also products available from Google that are not directly search-related.

Gmail, for example, is a webmail application, but still includes search features; Google Browser Sync does not offer any search facilities, although it aims to organize your browsing time.

Google also launches many new beta products, such as Google Social Search or Google Image Swirl.

In 2009, Google claimed that a search query requires altogether about 1 kJ or 0.0003 kW·h,[134] which is enough to raise the temperature of one liter of water by 0.24 °C.
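
As a rough consistency check of those figures (assuming the specific heat of water, about 4.19 kJ per kilogram per kelvin):

$$1\ \text{kJ} = \tfrac{1}{3600}\ \text{kWh} \approx 0.0003\ \text{kWh}, \qquad \Delta T = \frac{Q}{mc} = \frac{1\ \text{kJ}}{1\ \text{kg} \times 4.19\ \text{kJ/(kg·K)}} \approx 0.24\ \text{K}$$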

According to green search engine Ecosia, the industry standard for search engines is estimated to be about 0.2 grams of CO2 emission per search.[135] Google's 40,000 searches per second translate to 8 kg CO2 per second or over 252 million kilos of CO2 per year.[136]

In 2003, The New York Times complained about Google's indexing, claiming that Google's caching of content on its site infringed its copyright for the content.[137] In both Field v. Google and Parker v. Google, the United States District Court of Nevada ruled in favor of Google.[138][139]

Google flags search results with the message "This site may harm your computer" if the site is known to install malicious software in the background or otherwise surreptitiously.

For approximately 40 minutes on January 31, 2009, all search results were mistakenly classified as malware and could therefore not be clicked; instead a warning message was displayed and the user was required to enter the requested URL manually.

The bug was caused by human error.[140][141][142][143] The URL of "/" (which expands to all URLs) was mistakenly added to the malware patterns file.[141][142]

In 2007, a group of researchers observed a tendency for users to rely on Google Search exclusively for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. ... In fact, one only sees a small part of what one could see if one also integrates other research tools."[144]

In 2011, Internet activist Eli Pariser showed that Google Search query results are tailored to users, effectively isolating them in what he defined as a filter bubble.

Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".[145] Although contrasting views have mitigated the potential threat of "informational dystopia" and questioned the scientific nature of Pariser's claims,[146] filter bubbles have been mentioned to account for the surprising results of the 2016 U.S. presidential election, alongside fake news and echo chambers, suggesting that Facebook and Google have designed personalized online realities in which "we only see and hear what we like".[147]

In 2012, the US Federal Trade Commission fined Google US$22.5 million for violating their agreement not to violate the privacy of users of Apple's Safari web browser.[148] The FTC was also continuing to investigate whether Google's favoring of its own services in its search results violated antitrust regulations.[149]

Google Search engine robots are programmed to use algorithms that understand and predict human behavior.

The book, Race After Technology: Abolitionist Tools for the New Jim Code[150] by Ruha Benjamin talks about human bias as a behavior that the Google Search engine can recognize.

In 2016, some users searched Google for “three Black teenagers” and images of criminal mugshots of young African American teenagers came up.

Then, the users searched “three White teenagers” and were presented with photos of smiling, happy teenagers.

They also searched for “three Asian teenagers,” and very revealing photos of Asian girls and women appeared.

Benjamin came to the conclusion that these results reflect human prejudice and views on different ethnic groups.

A group of analysts explained the concept of a racist computer program: “The idea here is that computers, unlike people, can’t be racist but we’re increasingly learning that they do in fact take after their makers...Some experts believe that this problem might stem from the hidden biases in the massive piles of data that the algorithms process as they learn to recognize patterns...reproducing our worst values”.[150]

As people talk about "googling" rather than searching, the company has taken some steps to defend its trademark, in an effort to prevent it from becoming a generic trademark.[151][152] This has led to lawsuits, threats of lawsuits, and the use of euphemisms, such as calling Google Search a famous web search engine.

TOWeb

TOWeb is WYSIWYG website creation software for Microsoft Windows and Mac OS X that makes web publishing easy.[1] The latest version, 5, creates HTML5/CSS3 responsive websites compatible with any device.

It has some limitations; for example, one cannot open an external website for editing.[2]

The first version of TOWeb was developed over two years, from 2003 to 2005, under the project name "WebGen".

It was primarily intended for beginners without any HTML knowledge[3] in website creation.

The first version 1.0 was released on August 6, 2005.

The following releases (2.0, 3.0, 4.0, ...) brought new features such as the ability to create online stores.

The latest version, 5.0, released in June 2013, follows the responsive web design trend by allowing the creation of websites compatible with any device.[4] TOWeb is available in the following languages: English, French, Italian, Spanish, and Portuguese.


divus design


divus design is a graphic design business located in South East Sydney, Australia, offering creative services including logo design, corporate identity, website design, promotional design, brochure design, documentation & design, business stationery, print design & management and all general graphic design services.

We at divus design know how important your business identity is; it is more than just your sales team and the people who answer your telephones that portray the image of your business.

When not in direct contact with your firm, your customers or potential customers rely on the impression of your business they carry in their heads or on paper.

It is usually your corporate look that they think of when they think of you.

The first impression of your company for a customer will probably be your logo, which they will use to make assumptions about your business.

If you're an IT company and your logo is sharp, swift and technically advanced, that's the exact first impression that your customer will have of you and your business.

You know what they say: first impressions last! Our designs are creative and conceptual designs that work well and look good. We design from scratch every time, making your corporate identity, logo, covers or advertisements one of a kind, and this is more important in a competitive market than most would believe.

Most of the time, the decision to buy a product or service is made subconsciously by the creative side of our brains.

Our eyes pick up the shapes and objects that are easiest to recognise, and the most aesthetically pleasing images speak to us best.

At divus design we have few rules and boundaries to our work; however, we always structure our creative thinking around these four words: CLEAN - SIMPLE - FRESH - CONCEPTUAL.

Our services include:
* Logo Design & Corporate Identity
* Website Design
* Corporate Stationery
* Annual Reports
* Promotional Material
* Flyers & Brochures
* Newsletters
* Packaging
* Point-of-sale Marketing
* Company & Product Booklets
* Posters & Banners

Article Tags: divus design, Design Corporate, Corporate Identity

there are so many kinds of design

There are design schools where you learn fashion, tattoo design, graphic design, web design, interior design and architecture schools.

If you are looking into a field of design, it is almost hard to pinpoint which field you would like to go into.

the very basics of design

Design is a very subjective thing; therefore, if you ask me how to come up with a first-class design for your marketing collateral or publishing mediums, it would be very unfair for me or anyone else to tell you what is a good design and what is a bad design.

what to know to choose your web design team web design from vancouver bc to washington dc

An overview of the elements to prepare and what to consider when choosing a web design firm to design your website.

A comprehensive outline of what to look for and how to evaluate a potential web design company or agency.

Know what is important to the web design process and how to streamline the web development process.

things to know about becoming an interior designer

With city dwellers increasing day by day and available living spaces dwindling correspondingly, effective use of the available space has become a grave necessity. And owing to the increased standard of living in India and the new mindset of the younger generation, interior design has become an independent field of its own, coming out of the shadows of architecture and civil engineering.

If you are one of those design aspirants who want to take up interior design as a career, here's a list of things you must and should know about this popular design domain.

Being good with colours, textures, and the selection and placement of home decor objects isn't sufficient to become an interior designer. Interior decoration is a subdomain under interior design and only executes the process of design. Interior design is not entirely about design concepts. Along with skills like technical drawing, space design, material knowledge, furniture design, and familiarisation with interior design tools, you must also have great interpersonal and communication skills and maintain a good network of clients, contractors, and suppliers.

Since interior design as a career has only recently come into the spotlight, it is not wrong to say that there is a scarcity of interior designers in the nation at the moment.

Currently, there is a huge need for interior designers in India.

So if you are planning to take up interior design as a career now, I would say this is the right time to shine! The career will give you a long list of reasons to be out of the office, whether it is to meet clients, contractors and architects, or to visit the site to review progress. So if you are a person who dislikes boring office jobs and likes creative jobs where your ideas and spirit are appreciated, interior design is for you! If you still want to be a professional interior designer, here is your first step.

Explore the UG pathway in the Interior & Spatial Design Degree Course (Interior Architecture / Interior Design) or the PG pathway in Interior Design & Styling at Pearl Academy.

The Academy provides a perfect balance of traditional plus modern learning with various study trips, industry projects, and international student exchange programs.

what qualifies as affordable website design

As I was searching for a more detailed and technical clarification of the importance of website design I found a company from England that gave me a very detailed explanation.

I knew about a few uses of a good website design, but Website Design Bournemouth opened the door to a new world for me by clarifying the significance of a professional website design.

They clearly state one thing, and they say it very well: an affordable website design is a professional website design.

Let us take a look at Website Design Bournemouth's explanation of what website design is used for.

virtual reality in architectural design

Architectural design sector professionals already use a range of established 3D design and visualisation software tools as an embedded part of their overall architectural design process, workflow, and design output. As well as the use of such 3D tools for 3D architectural modelling, walkthroughs and 3D architectural rendering or realistic computer-generated images, the introduction of BIM a few years ago encouraged further uptake of 3D modelling tools within the architectural design domain.

However, BIM, despite its many benefits, does have its limits. Firstly, the architect or designer cannot fully immerse himself in his design with traditional 3D models. Secondly, and in some ways the biggest challenge, the end client, who certainly cannot understand 2D drawings or architectural drafting sheets quickly, also struggles to visualize the eventual design solution, even with the aid of 3D images and 3D walkthroughs. 3D models and BIM models are also not able to give the client a real appreciation of spatial relationships, as 3D rendered images and walkthroughs usually provide a third-person, eye-level view where spatial relationships are difficult to apprehend.

Virtual Reality (VR) in construction is a tool that is starting to bridge some of the gaps highlighted above. The ability to follow a self-controlled route or journey through a proposed environment is something that was restricted to gaming just a few years ago. Being able to put an architectural design or a range of concepts into such an immersive environment, also known as VR in architecture, starts to provide many benefits that would otherwise not be possible. The remainder of this article is concerned with design team members (architects) and also the customers or end clients.

Considering architects first: whilst the architect has been comfortable enough in the past with his 3D walkthroughs and 3D eye-level images, the potential of being immersed within his own design, with the ability to make constant changes, is a step ahead. Whilst VR has until recently felt like a solo experience, design and design commenting is a more collaborative experience, and it therefore opens up more opportunities when combined with architectural design. The ability to mark up and make suggestions for changes in a model, especially with other users present in that same model, opens up a whole new field of opportunities for the architect. For example, having the option of changing the lighting or the orientation of a building on a given plot and appreciating that change within a 3D environment allows instant feedback and shared experiences that can lead to effective design iteration.

Using VR in architecture will also help the architect and his team to understand the construction, usability and serviceability of certain areas, and again these can be highlighted for improvements in design.

For customers and clients, the ability to move around a 3D environment is much more realistic, as it allows them to feel that they are standing within the building rather than looking at it from a distance. Most customers will not be too worried about detailed aesthetic appearance; they will be more concerned with, and will gain more benefit and appreciation from, the spatial realism that is offered by VR. From the designer's perspective, being able to use VR with the customer will ensure that direct feedback based on actual experience is obtained, and design issues that might not otherwise be apparent until much later on can be ironed out.

In both cases, the use of VR in the design process is a way to achieve buy-in amongst the design team or from the client. In order to achieve that buy-in, feedback is required on a regular and instant basis, as and when design updates are made. This means that the challenge for VR is to provide instantaneous, real-time BIM-to-VR and back-to-BIM capability. It would be of no use were an architect to produce a version of his BIM model, wait for a VR model to be created by a third party and then look at the VR model, as by then a new model may already have been created or significant updates added to the previous model.

In essence, BIM and VR need an instantaneous feed to one another if the usability and benefit for architectural design are to be maximised. Instantaneous BIM to VR does have some limitations at present, however, and these are primarily hardware and software related. Whilst the cost of HMDs (head-mounted displays) is coming down every year, the cost of the hardware needed to run VR models is still high compared with the cost and specification of a CAD workstation. Creating a VR model in the first place, on the other hand, is becoming easier.

Whilst gaming engines such as Unreal Engine have been used to convert BIM models to date, a new breed of BIM-to-VR applications is now being developed.

The aim is to provide instantaneous BIM-to-VR availability. However, many of these applications are recent and therefore still in their infancy in terms of development.

Autodesk Live is one such Revit-to-VR application.

It is powered by the cloud, is easy to use and is also compatible with BIM and VR.

It allows users to take a Revit model into a virtual reality space quickly and easily. Other tools include IrisVR and Enscape, both real-time visualisation tools that work with Revit, again with their own pros and cons.

In terms of VR adoption in architecture, it is clear that customer demand has driven take-up in the sector. The ability to bring a 3D design into a VR environment, and the collaborative development of the design from that point, is still in its infancy. However, just as collaboration with the extended design team has worked to date, the use of VR will not only improve that collaboration but also allow customers to comment on a design and therefore avoid design surprises during the build phase. As an experienced design partner, XS CAD continues to create BIM models that provide compatibility with VR models. XS CAD is also able to provide VR modelling support to customers in the homebuilding, retail and engineering sectors, using its considerable modelling and BIM experience in those sectors to provide a solution that is in line with the current state of VR technology in the AEC sector.

web design course from an education institution or a design agency

A list of different sources where you can find web design courses, including universities, colleges and web design companies.

uk graphic design and web design what does the future of crowdsourcing hold

It would be naive to think that the UK is the only country capable of fielding graphic design talent in this global economy.

So what do the next few years hold for the UK in terms of graphic design jobs and the widespread outsourcing of design to foreign territories?

fundamentals of good web design

There are no objective standards for Web design, but that's a shame.

While novel and inventive interface design is to be encouraged, the bottom line for most sites is usability.

When the design starts to intrude on usefulness, the decision is easy: make it easy for the user.

Without delving heavily into the programming nuts and bolts of design implementation, we offer the following modest proposals:

1. Use Consistent Navigation
Give the users consistent navigation throughout the site.

The importance of this simple point can't be overstated, as newbies invariably get lost.

Moreover, you should try to accommodate users with old systems and users with disabilities.

Some users disable JavaScript, and others use text-only browsers, so provide text-only navigation links to accommodate all users (or provide an alternate site).
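As a loose illustration of that point, here is a minimal Python sketch that renders a plain-HTML, text-only navigation fragment suitable for inclusion on every page; the page labels and URLs are invented for the example and do not refer to any real site.

# Illustrative sketch only: renders a plain-HTML, text-only navigation
# fragment that works with JavaScript disabled and in text-only browsers.
# The page labels and URLs below are invented for the example.
from html import escape

NAV_LINKS = [
    ("Home", "/"),
    ("Services", "/services/"),
    ("Portfolio", "/portfolio/"),
    ("Contact", "/contact/"),
]

def render_text_nav(links=NAV_LINKS):
    # Build one <li><a>...</a></li> entry per page, escaping labels and URLs.
    items = "".join(
        '<li><a href="{0}">{1}</a></li>'.format(escape(url, quote=True), escape(label))
        for label, url in links
    )
    # Wrap the entries in a simple list that any browser can render.
    return '<ul class="text-nav">{0}</ul>'.format(items)

if __name__ == "__main__":
    print(render_text_nav())

Because the fragment is plain anchors and list markup, the same links work identically with scripts enabled or disabled.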

2. Provide a Site Map
Just plain common courtesy, if you ask me.

When I am in a hurry, the last thing I want to do is dig through a hierarchical Web site structure to search for something that I know exists on the site.
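For illustration only, the short Python sketch below builds a human-readable site map page from a hand-maintained outline; the section names and URLs are made up, and a real site would derive the outline from its own content structure.

# Illustrative sketch only: builds a human-readable site map page from a
# hand-maintained outline of the site. Section names and URLs are made up.
SITE_OUTLINE = {
    "Services": ["/services/seo/", "/services/web-design/"],
    "About": ["/about/team/", "/about/history/"],
    "Support": ["/faq/", "/contact/"],
}

def render_site_map(outline=SITE_OUTLINE):
    # List every section and page in one place so visitors can reach any
    # page directly instead of digging through the site hierarchy.
    parts = ["<h1>Site Map</h1>"]
    for section, pages in outline.items():
        parts.append("<h2>{0}</h2>".format(section))
        parts.append("<ul>")
        for url in pages:
            parts.append('<li><a href="{0}">{0}</a></li>'.format(url))
        parts.append("</ul>")
    return "\n".join(parts)

if __name__ == "__main__":
    print(render_site_map())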

3. Provide a Contacts Page
You would be amazed at how many companies have ZERO contact information on their Web sites.

Moreover, a generic e-mail link is NOT sufficient; you need to give people addresses, phone numbers, etc.

In order for the Web to deliver on its promise, it must be used to increase the transparency of organizations.

4. Listen to the Users
Give your users a method for providing feedback.

It's true that people rarely use the feedback option, but it's also true that they really hate it when they are not given the option.

The usability of your feedback system is key when problems strike; a good system eases tensions and a bad one escalates them dramatically.

(Do we need to point out that timely response to feedback forms is also a necessity?)

5. Build an Intuitive Interface
The ideal interface must meet two criteria: (1) newbies must be presented with an easy-to-learn, consistent system, while (2) experienced users should be able to navigate the site quickly. The design should not impede or interfere with navigation by an experienced user who is familiar with the site.

6. Provide FAQs
If your site generates a lot of questions or has complex content systems, you should include an FAQ that provides answers to the most common issues.

Trust us, this feature will save you AND your users time.

7. Strive for Compelling Content
O.K., so this isn't exactly a true "design" point, but it still must be mentioned: you must give users a reason to return.

8. Insist on Quick Access
Building a page that looks good and loads quickly is not the easiest of jobs.

Add in the labyrinthine nature of some of the connections between you and the Web server, and it is not surprising that page loading times vary wildly.

Still, there are things your designer can do.

Try the 15-second rule: if the page doesn't load in 15 seconds, it is too big.

Tell your Web team to decrease file sizes.
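As a rough, illustrative check of that rule (and nothing more), the Python sketch below times the download of a page's base HTML and reports its size; the URL and the 15-second budget are placeholders, and a full audit would also account for images, scripts and rendering time.

# Rough, illustrative check of the "15 second rule": time the download of a
# page's base HTML document and report its size. The URL and the budget are
# placeholders; images, scripts and render time are not measured here.
import time
import urllib.request

def check_page_budget(url, budget_seconds=15.0):
    # Fetch the page and measure how long the download alone takes.
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=budget_seconds * 2) as response:
        body = response.read()
    elapsed = time.monotonic() - start
    over_budget = elapsed > budget_seconds
    note = " - over budget, trim file sizes" if over_budget else ""
    print("{0}: {1:.1f} s, {2} bytes{3}".format(url, elapsed, len(body), note))
    return not over_budget

if __name__ == "__main__":
    check_page_budget("https://www.example.com/")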

9. Strive for Simplicity
Make simple, common tasks easy to do.

When long procedures are necessary for new users, meaningful shortcuts should be provided for experienced users.

10. Provide Feedback
A well-designed site should give users feedback in response to user input, errors, and changes in status.

The information should be communicated simply, with an indication of what options are available to the user.

11. Be Tolerant
The site should be tolerant of errors and unusual usage.

Beta testing of the site should anticipate a wide variety of erroneous or atypical user behaviors.

While it is probably impossible to anticipate every possible misuse, the site should handle mistakes with grace and, when possible, provide the user with guidance.

Written by Ric Shreves, http://www.waterandstone.com. Source: Free Articles from ArticlesFactory.com

5 ultimate graphic design mistakes things that graphic designers should avoid at all costs

With many young designers coming from a predominantly web design background, the move from web design to traditional design for print can bring with it a multitude of design sins.

the main elements of web design

What constitutes a good website design? Does it showcase your design prowess? Does it prove what a brilliant graphic designer you are? Does your website design win you design awards? Or does it exist to establish a platform for you and your visitors to interact with each other, unhindered by usability glitches?

lets go design in 3d

It is true that 3D modelling is a better approach to design than 2D.

This is because the design can be rectified and perfected when it can be viewed from multiple perspectives and by simulating assembly of the final product. 3D can also help in identifying problems related to DFMA (Design for Manufacturing and Assembly), which is not possible in a traditional 2D design approach.

graphic design contest a channel for creativity

Contests are held for graphic design of all kinds these days: t-shirts, slogan and logo design, graphic poster design, website design or any other kind of graphic design.

There is a great chance to express creativity, make money and friends, and hang out with like-minded people as well. A graphic design contest may cover any kind of graphic design, such as slogan and logo design, graphic poster design, website design, or just about anything.

What is more, anyone, even someone not trained in graphic design, can send in an entry and take part in these online competitions.

How would you like to design a t-shirt, for instance? How would you like to wear a t-shirt that you designed yourself? This is a trend that is getting more and more popular on the internet these days.

It works like this: t-shirt design contests are held by many sites, where anyone can enter by submitting their own t-shirt design.

These graphic design contests are a great way to express creativity.

They give people a chance to express their creativity, sense of humor, or even their existential angst! There are many sites that hold such contests.

Inkfruit.com is an interesting one with attractive cash prizes for winners and fame in the form of their name being printed on the label.

Contests2Win.com is another one that holds design and photography contests regularly.

Many of the sites that run graphic design contests offer substantial prizes, in cash or sometimes in kind, for the winning entry, which is a great incentive.

Another interesting thing about design contests is that you can visit the site and check out the forums that are often set up for like-minded people to hang out and exchange thoughts, ideas, news and banter, or just chat and make friends.

Design forums or t-shirt forums are places where you can view and post questions and communicate publicly or privately with members or moderators of the community.

You can also set up, respond to, or participate in polls.

You can also buy and sell items of interest through the classifieds that are set up on the forums.

As a member of the forum you can take part in all the contests and giveaways and access many other special features.

This is a great way to become part of a community where you can learn, share tips and experiences, and get help from like-minded people, as well as network for your own benefit.

Forums are also a great networking tool and a way to keep abreast of what is going on in terms of new contests, announcements of winners and prizes, and so on.

Seeing other people's designs, especially the winning entries, can be inspiring and give you ideas for creating your own graphic design.

The great thing about these forums is that you can learn a lot and benefit from other people's experiences.

Marketing, business and finance are other topics that can be discussed, which can be very beneficial.

So don't hide your creativity away: express it! And don't think that you cannot do it; you can! Original ideas always get a lot of appreciation, and perhaps next time it will be you who gets the appreciation as well as the prize money!

