Cheap SEO Packages

Local SEO Somerset | +44 7976 625722


49B Bridle Way, Barwick, Yeovil, BA22 9TN

https://sites.google.com/site/localseoservicesgold/

http://www.localseoservicescompany.co.uk/

Search engine marketing

Search engine marketing (SEM) is a form of Internet marketing that involves the promotion of websites by increasing their visibility in search engine results pages (SERPs), primarily through paid advertising.[1] SEM may incorporate search engine optimization (SEO), which adjusts or rewrites website content and site architecture to achieve a higher ranking in search engine results pages and to enhance pay per click (PPC) listings.[2]

In 2007, U.S. advertisers spent US$24.6 billion on search engine marketing.[3] In Q2 2015, Google (73.7%) and the Yahoo/Bing partnership (26.3%) accounted for almost 100% of U.S. search engine spend.[4] As of 2006, SEM was growing much faster than traditional advertising and even other channels of online marketing.[5] Managing search campaigns is done either directly with the SEM vendor or through an SEM tool provider.

It may also be self-serve or through an advertising agency.

As of October 2016, Google leads the global search engine market with a market share of 89.3%. Bing comes second with a market share of 4.36%, Yahoo comes third with a market share of 3.3%, and the Chinese search engine Baidu is fourth globally with a share of about 0.68%.[6]

As the number of sites on the Web increased in the mid-to-late 1990s, search engines started appearing to help people find information quickly.

Search engines developed business models to finance their services, such as pay per click programs offered by Open Text[7] in 1996 and then Goto.com[8] in 1998.

Goto.com later changed its name[9] to Overture in 2001, was purchased by Yahoo! in 2003, and now offers paid search opportunities for advertisers through Yahoo! Search Marketing.

Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program.

By 2007, pay-per-click programs proved to be primary moneymakers[10] for search engines.

In a market dominated by Google, in 2009 Yahoo! and Microsoft announced the intention to forge an alliance.

The Yahoo! & Microsoft Search Alliance eventually received approval from regulators in the US and Europe in February 2010.[11] Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged.

The term "Search engine marketing" was popularized by Danny Sullivan in 2001[12] to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals.

Search engine marketing uses at least five methods and metrics to optimize websites.[citation needed] It involves creating and editing a website so that search engines rank it higher than other pages, and it also focuses on keyword marketing and pay-per-click (PPC) advertising.

PPC technology enables advertisers to bid on specific keywords or phrases, ensuring that their ads appear alongside the results of search engines.

As this system has developed, competition has driven up prices for heavily contested keywords.

Many advertisers prefer to expand their activities, advertising on additional search engines and bidding on more keywords.

The more advertisers are willing to pay for clicks, the higher their ads rank, which leads to higher traffic.[15] PPC comes at a cost.
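As a minimal illustration of that bid-ordered ranking (the advertiser names and bids are hypothetical; real auctions such as Google's also weigh ad quality, which this passage does not cover):

```python
# Hypothetical bids for one keyword; a higher bid earns a higher ad
# position, matching the simplified bid-ranking model described above.
bids = {"advertiser_a": 5.00, "advertiser_b": 4.50, "advertiser_c": 4.75}

ranking = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)

for position, (advertiser, bid) in enumerate(ranking, start=1):
    print(f"position {position}: {advertiser} at ${bid:.2f} per click")
```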

For a given keyword, the top position might cost $5 per click and the third position $4.50; the third advertiser pays 10% less than the top advertiser but may receive 50% less traffic.[15] Advertisers must consider their return on investment when engaging in PPC campaigns.

Buying traffic via PPC delivers a positive return on investment (ROI) when the total cost per click for a single conversion remains below the profit margin; that way, the amount of money spent to generate revenue stays below the actual revenue generated.[16]
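That break-even rule can be checked directly. The sketch below uses hypothetical numbers (not figures from the text) to show the arithmetic:

```python
# Minimal PPC ROI check (all numbers are hypothetical illustrations).
cost_per_click = 0.50          # average CPC, in dollars
conversion_rate = 0.02         # 2% of clicks lead to a sale
profit_per_conversion = 30.00  # profit margin per sale, before ad spend

# Acquiring one conversion takes 1/conversion_rate clicks on average.
cost_per_conversion = cost_per_click / conversion_rate  # = $25.00

# Positive ROI while acquisition cost stays below the profit margin.
roi = (profit_per_conversion - cost_per_conversion) / cost_per_conversion
print(f"cost per conversion: ${cost_per_conversion:.2f}")
print(f"ROI: {roi:.0%}")  # 20% here; a negative value means a loss
```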

There are many reasons why advertisers choose the SEM strategy.

First, creating an SEM account is easy and can build traffic quickly, depending on the degree of competition.

Shoppers who use a search engine to find information tend to trust and focus on the links shown in the results pages.

However, a large number of online sellers do not invest in search engine optimization to obtain higher organic rankings, preferring paid links instead.

A growing number of online publishers are allowing search engines such as Google to crawl content on their pages and place relevant ads on it.[17] From an online seller's point of view, this is an extension of the payment settlement and an additional incentive to invest in paid advertising projects.

It is therefore virtually impossible for advertisers with limited budgets to maintain the highest rankings in the increasingly competitive search market.

Google's search advertising business is one of the western world's marketing leaders, and search advertising is Google's biggest source of profit.[18] Google's search network is clearly ahead of the Yahoo and Bing networks.

The display of organic search results is free, while advertisers are willing to pay for each click of an ad in the sponsored search results.

Paid inclusion involves a search engine company charging fees for the inclusion of a website in their results pages.

Also known as sponsored listings, paid inclusion products are provided by most search engine companies either in the main results area or as a separately identified advertising area.

The fee structure is both a filter against superfluous submissions and a revenue generator.

Typically, the fee covers an annual subscription for one webpage, which will automatically be catalogued on a regular basis.

However, some companies are experimenting with non-subscription based fee structures where purchased listings are displayed permanently.

A per-click fee may also apply.

Each search engine is different.

Some sites allow only paid inclusion, although these have had little success.

More frequently, many search engines, like Yahoo!,[19] mix paid inclusion (per-page and per-click fee) with results from web crawling.

Others, like Google (and as of 2006, Ask.com[20][21]), do not let webmasters pay to be in their search engine listing (advertisements are shown separately and labeled as such).

Some detractors of paid inclusion allege that it causes searches to return results based more on the economic standing of a website's interests and less on the site's relevancy to end users.

Often the line between pay per click advertising and paid inclusion is debatable.

Some have lobbied for any paid listings to be labeled as an advertisement, while defenders insist they are not actually ads since the webmasters do not control the content of the listing, its ranking, or even whether it is shown to any users.

Another advantage of paid inclusion is that it allows site owners to specify particular schedules for crawling pages.

In the general case, site owners have no control over when their pages will be crawled or added to a search engine index.

Paid inclusion proves to be particularly useful for cases where pages are dynamically generated and frequently modified.

Paid inclusion is a search engine marketing method in itself, but it is also a tool of search engine optimization, since experts and firms can test out different approaches to improving ranking and see the results often within a couple of days, instead of waiting weeks or months.

Knowledge gained this way can be used to optimize other web pages, without paying the search engine company.

SEM is the wider discipline that incorporates SEO.

SEM includes both paid search results (using tools like Google AdWords or Bing Ads, formerly known as Microsoft adCenter) and organic search results (SEO).

SEM uses paid advertising with AdWords or Bing Ads; pay per click, particularly beneficial for local providers as it enables potential consumers to contact a company directly with one click; article submissions; advertising; and making sure SEO has been done.

A keyword analysis is performed for both SEO and SEM, but not necessarily at the same time.

SEM and SEO both need to be monitored and updated frequently to reflect evolving best practices.

In some contexts, the term SEM is used exclusively to mean pay per click advertising,[2] particularly in the commercial advertising and marketing communities which have a vested interest in this narrow definition.

Such usage excludes the wider search marketing community that is engaged in other forms of SEM such as search engine optimization and search retargeting.

Creating the link between SEO and PPC represents an integral part of the SEM concept.

Sometimes, especially when separate teams work on SEO and PPC and the efforts are not synced, positive results of aligning their strategies can be lost.

The aim of both SEO and PPC is to maximize visibility in search, and thus their actions to achieve it should be centrally coordinated.

Both teams can benefit from setting shared goals and combined metrics, evaluating data together to determine future strategy or discuss which of the tools works better to get the traffic for selected keywords in the national and local search results.

In this way, search visibility can be increased while optimizing both conversions and costs.[22]

Another part of SEM is social media marketing (SMM). SMM is a type of marketing that involves exploiting social media to persuade consumers that one company's products and/or services are valuable.[23]

Some of the latest theoretical advances include search engine marketing management (SEMM). SEMM relates to activities including SEO but focuses on return on investment (ROI) management instead of relevant traffic building (as in mainstream SEO).

SEMM also integrates organic SEO, which tries to achieve top ranking without using paid means, with pay-per-click SEO.

For example, some of the attention is placed on the web page layout design and how content and information is displayed to the website visitor.

SEO & SEM are two pillars of one marketing job and they both run side by side to produce much better results than focusing on only one pillar.

Paid search advertising has not been without controversy and the issue of how search engines present advertising on their search result pages has been the target of a series of studies and reports[24][25][26] by Consumer Reports WebWatch.

The Federal Trade Commission (FTC) also issued a letter[27] in 2002 about the importance of disclosure of paid advertising on search engines, in response to a complaint from Commercial Alert, a consumer advocacy group with ties to Ralph Nader.

Another ethical controversy associated with search marketing has been the issue of trademark infringement.

The debate as to whether third parties should have the right to bid on their competitors' brand names has been underway for years.

In 2009 Google changed its policy, which formerly prohibited these tactics, and began allowing third parties to bid on branded terms as long as their landing page in fact provides information on the trademarked term.[28] Though the policy has been changed, this continues to be a source of heated debate.[29]

On April 24, 2012, many site owners began to notice that Google was penalizing companies that buy links for the purpose of passing PageRank.

The Google Update was called Penguin.

Since then, there have been several different Penguin/Panda updates rolled out by Google.

SEM has, however, nothing to do with link buying and focuses on organic SEO and PPC management.

As of October 20, 2014, Google had released three official revisions of their Penguin Update.

In 2013, the Tenth Circuit Court of Appeals held in Lens.com, Inc. v. 1-800 Contacts, Inc. that online contact lens seller Lens.com did not commit trademark infringement when it purchased search advertisements using competitor 1-800 Contacts' federally registered 1800 CONTACTS trademark as a keyword.

In August 2016, the Federal Trade Commission filed an administrative complaint against 1-800 Contacts alleging, among other things, that its trademark enforcement practices in the search engine marketing space have unreasonably restrained competition in violation of the FTC Act.

1-800 Contacts denied all wrongdoing and appeared before an FTC administrative law judge in April 2017.[30]

AdWords is recognized as a web-based advertising tool, since it adopts keywords that can deliver adverts explicitly to web users looking for information about a certain product or service.

It is flexible and provides customizable options such as Ad Extensions, access to non-search sites, and leveraging the display network to help increase brand awareness.

Pricing hinges on cost per click (CPC): a maximum cost per day for the campaign can be chosen, and payment applies only if the advert has been clicked.

SEM companies have embarked on AdWords projects as a way to publicize their SEM and SEO services.

One of the most successful approaches was to focus on making sure that PPC advertising funds were prudently invested.

Moreover, SEM companies have described AdWords as a practical tool for increasing a client's return on Internet advertising.

The use of conversion tracking and Google Analytics was deemed practical for presenting to clients the performance of their campaigns from click to conversion.

AdWords projects have enabled SEM companies to train their clients on the tool and deliver better campaign performance. Well-managed AdWords campaigns could contribute to the growth of web traffic for a number of client websites, by as much as 250% in only nine months.[31]

Another way search engine marketing is managed is by contextual advertising. Here marketers place ads on other sites or portals that carry information relevant to their products, so that the ads appear in front of users seeking information from those sites.

A successful SEM plan captures the relationships among information searchers, businesses, and search engines.

Search engines were not important to some industries in the past, but over recent years their use for accessing information has become vital to increasing business opportunities.[32] The use of SEM strategic tools in businesses such as tourism can attract potential consumers to view their products, but it can also pose various challenges,[33] such as the competition that companies face within their industry and other sources of information that can draw the attention of online consumers.[32] To meet these challenges, the main objective of businesses applying SEM is to improve and maintain their ranking as high as possible on SERPs so that they can gain visibility.

Search engines are therefore adjusting and developing algorithms and shifting ranking criteria to combat search engine misuse and spamming, and to supply the most relevant information to searchers.[32] This can enhance the relationship among information searchers, businesses, and search engines as each comes to understand the marketing strategies used to attract business.

White-label product

A white-label product is a product or service produced by one company (the producer) that other companies (the marketers) rebrand to make it appear as if they had made it.[1][2] The name derives from the image of a white label on the packaging that can be filled in with the marketer's trade dress.

Its origins can be traced to vinyl records.[3] Before records were released to the public, often before official artwork was designed and printed, record labels sent promotional copies in a white sleeve to DJs to solicit radio and nightclub play, in an effort to build hype, gauge public interest, and ultimately better estimate manufacturing quantities.

This created a situation where certain respected or well-connected DJs had exclusive copies of material, immediately increasing demand for certain big records.

White label production is often used for mass-produced generic products including electronics,[4] consumer products and software packages[5] such as DVD players, televisions, and web applications.

Some companies maintain a sub-brand for their goods, for example the same model of DVD player may be sold by Dixons as a Saisho and by Currys as a Matsui, which are brands exclusively used by those companies.[6] Some websites use white labels to enable a successful brand to offer a service without having to invest in creating the technology and infrastructure itself.

Many IT and modern marketing companies outsource or use white-label companies and services to provide specialist services without having to invest in developing their own product.

Digital Marketing companies like Conduit Digital provide white label services like Over-The-Top advertising (OTT), SEO, Paid Search, Paid Social, Programmatic Video, Programmatic Display and many more to other advertising agencies under a white-label package.

Most supermarket store brand products are provided by companies that sell to multiple supermarkets, changing only the labels.

In addition, some manufacturers create low-cost generic brand labels with only the name of the product ("Beer", "Cola", etc.).

Richelieu Foods, for example, is a private-label food manufacturing company producing frozen pizza, salad dressing, sauces, marinades, condiments, and deli salads for other companies, including Hy-Vee, Aldi, Save-A-Lot, Sam's Club,[7] Hannaford Brothers Co.,[8] BJ's Wholesale Club (Earth's Pride brand), and Shaw's Supermarkets (Culinary Circle brand).[8]

Smaller banks sometimes outsource their credit card or check processing operations to larger banks, which issue and process the credit cards as white-label cards, typically for a fee, allowing the smaller bank to brand the cards as its own without having to invest in the necessary infrastructure.

For example, Cuscal Limited provides white label card and transactional products to Credit Unions in Australia; Simple (formerly BankSimple) issues bank accounts and debit cards operated by The Bancorp Bank and BBVA Compass in the United States.[9] In Southern California, City National Bank is the largest check processor in that half of the state, because in addition to checks issued by its own customers, CNB processes checks for the customers of more than 60 smaller Southern California banks.

Many software companies offer white label software to agencies or other customers, including the possibility to resell the software under the customer’s brand.

This typically requires functionalities such as the adaptation of the software’s visual appearance, multi-customer management and automatic billing to the end-customers based on usage parameters.
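A minimal sketch of those functionalities (all names, fields, and prices here are hypothetical, invented for illustration rather than drawn from any specific product): per-reseller branding plus usage-based billing of end customers.

```python
from dataclasses import dataclass

# Hypothetical white-label setup: the producer's software is rebranded
# per reseller, and end customers are billed on usage parameters.

@dataclass
class Reseller:
    brand_name: str        # the name end customers see
    logo_url: str          # visual appearance swapped per reseller
    price_per_unit: float  # each reseller sets its own unit price

@dataclass
class Customer:
    reseller: Reseller
    usage_units: int       # e.g. API calls or seats used this month

    def monthly_invoice(self) -> str:
        total = self.usage_units * self.reseller.price_per_unit
        # The invoice carries the reseller's brand, not the producer's.
        return f"{self.reseller.brand_name}: {self.usage_units} units -> ${total:.2f}"

agency = Reseller("Acme Agency", "https://example.com/acme-logo.png", 0.05)
print(Customer(agency, usage_units=1200).monthly_invoice())
```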

Luxury goods

In economics, a luxury good (or upmarket good) is a good for which demand increases more than proportionally as income rises, so that expenditures on the good become a greater proportion of overall spending.

Luxury goods are in contrast to necessity goods, for which demand increases proportionally less than income.[1] Luxury good is often used synonymously with superior good.

The word "luxury" originated from the Latin word luxus, which means indulgence of the senses, regardless of cost.[2] A luxury good can be identified by comparing the demand for the good at one point in time against the demand for the good at a different point in time, with a different income level.

When income goes up, demand for luxury goods goes up even more than income did. When income goes down, demand for luxury goods goes down even more than income did.[3] For example, if income goes up 1% and the demand for a product goes up 2%, then the product is a luxury good.

This contrasts with basic goods, for which demand stays the same or decreases only slightly as income goes down.[3]

Luxury goods have high income elasticity of demand: as people become wealthier, they will buy proportionately more luxury goods. This also means, however, that should there be a decline in income, demand will drop more than proportionately.
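In symbols (a standard definition, consistent with the 1%/2% example above), the income elasticity of demand compares the percentage change in quantity demanded with the percentage change in income, and a good is a luxury when the ratio exceeds one:

```latex
\[
  E_I \;=\; \frac{\%\,\Delta Q}{\%\,\Delta I}
      \;=\; \frac{\Delta Q / Q}{\Delta I / I},
  \qquad
  E_I > 1 \;\Longrightarrow\; \text{luxury good}.
\]
% Example from the text: income up 1%, demand up 2%, so E_I = 2 > 1.
```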

Income elasticity of demand is not constant with respect to income, and may change sign at different levels of income.

That is to say, a luxury good may become a necessity good or even an inferior good at different income levels.

Some luxury products have been claimed to be examples of Veblen goods, with a positive price elasticity of demand: for example, making a perfume more expensive can increase its perceived value as a luxury good to such an extent that sales can go up, rather than down.

However, it is important to note that Veblen goods are not the same as luxury goods in general.

Although the technical term luxury good is independent of the goods' quality, they are generally considered to be goods at the highest end of the market in terms of quality and price.

Classic luxury goods include haute couture clothing, accessories, and luggage.[4] Many markets have a luxury segment including, for example, luxury versions of automobiles, yachts, wine, bottled water, coffee, tea, foods, watches, clothes, jewelry, and high-fidelity sound equipment.[4] Luxuries may be services.

The hiring of full-time or live-in domestic servants is a luxury reflecting disparities of income.

Some financial services, especially in some brokerage houses, can be considered luxury services by default because persons in lower-income brackets generally do not use them.

Luxury goods often have special luxury packaging to differentiate the products from mainstream competitors.

Originally, luxury goods were available only to the very wealthy and the "aristocratic world of old money" that offered them a history of tradition, superior quality, and a pampered buying experience.[5] Luxury goods have been transformed by a shift from custom-made (bespoke) works, with exclusive distribution practices by specialized, quality-minded family-run and small businesses, to the mass production of specialty branded goods by profit-focused large corporations and marketers.[5] The trend in modern luxury is simply a product or service that is marketed, packaged, and sold by global corporations that are focused "on growth, visibility, brand awareness, advertising, and, above all, profits."[5] Increasingly, luxury logos are now available to all consumers at a premium price across the world, including online.[6]

Three dominant trends have accelerated the rapid growth of the industry, shaping its customer base and the consumption of different brands.[7] These trends in the global luxury goods market are globalization, consolidation, and diversification.[8] Consolidation involves the growth of big companies and ownership of brands across many segments of luxury products.

Examples include LVMH, Richemont, and Kering, which dominate the market in areas ranging from luxury drinks to fashion and cosmetics.[9] Global consumer companies, such as Procter & Gamble, are also attracted to the industry, due to the difficulty of making a profit in the mass consumer goods market.[10] The customer base for various luxury goods continues to become more culturally diversified, and this presents more unseen challenges and new opportunities to companies in this industry.[7]

The luxury goods market has been on an upward climb for many years.

Apart from the setback caused by the 1997 Asian Financial Crisis, the industry has performed well, particularly in 2000.

In that year, the world luxury goods market, which includes drinks, fashion, cosmetics, fragrances, watches, jewelry, luggage, and handbags, was worth close to $170 billion and grew 7.9 percent.[11] The United States has been the largest regional market for luxury goods and was estimated to remain the leading personal luxury goods market in 2013, with a value of 62.5 billion euros.[12] The largest sector in this category was luxury drinks, including premium whisky, champagne, and cognac.

This sector was the only one that suffered a decline in value (-0.9 percent).[citation needed] The watches and jewelry section showed the strongest performance, growing in value by 23.3 percent, while the clothing and accessories section grew 11.6 percent between 1996 and 2000, to $32.8 billion.

North America is the largest regional market for luxury goods.

The ten largest markets for luxury goods account for 83 percent of overall sales and include Japan, China, the United States, Russia, Germany, Italy, France, the United Kingdom, Brazil, Spain, and Switzerland.[citation needed] In 2012, China surpassed Japan as the world's largest luxury market.[13] China's luxury consumption accounts for over 25% of the global market.[14] The Economist Intelligence Unit published a report on the outlook for luxury goods in Asia,[15] which explores the trends and forecasts for the luxury goods market in China and other key markets in Asia.

As of 2014, the luxury sector was expected to grow over the following 10 years because of 440 million consumers spending a total of 880 billion euros, or $1.2 trillion.[16]

Though often verging on the meaningless in modern marketing, "luxury" remains a legitimate and current technical term in art history for objects that are especially highly decorated to very high standards and use expensive materials.

The term is especially used for medieval manuscripts to distinguish between practical working books for normal use, and fully illuminated manuscripts, that were often bound in treasure bindings with metalwork and jewels.

These are often much larger, with less text on each page and many illustrations, and, in the case of liturgical texts, were originally usually kept on the altar or in the sacristy rather than in any library that the owning church or monastery may have had.

Secular luxury manuscripts were commissioned by the very wealthy and differed in the same ways from cheaper books.[citation needed] "Luxury" may be used for other applied arts where both utilitarian and luxury versions of the same types of objects were made.

This might cover metalwork, ceramics, glass, arms and armour, and a wide range of objects.

It is much less used for objects with no function beyond being an artwork: paintings, drawings, and sculpture, even though the disparity in cost between an expensive and a cheap work may be just as large.[citation needed]

With the increasing "democratization" of luxury goods,[17] new product categories have been created within the luxury market, called "accessible luxury" or "mass luxury". These are meant specifically for the middle class, sometimes called the "aspiring class" in this context.

Because luxury has now diffused into the masses, defining the word has become more difficult.[18] Bringing the long and generally very unsuccessful history of sumptuary laws designed to curb excessive personal consumption up to the modern day, in February 2013 the Chinese government banned advertisements for luxury goods on its official state radio and television channels.[19]

Several manufactured products attain the status of "luxury goods" due to design, quality, durability, or performance that is remarkably superior to comparable substitutes.[20] Thus, virtually every category of goods available on the market today includes a subset of similar products whose "luxury" is marked by better-quality components and materials, solid construction, stylish appearance, increased durability, better performance, advanced features, and so on.

As such, these luxury goods may retain or improve the basic functionality for which all items of a given category are originally designed.[citation needed]

There are also goods that are perceived as luxurious by the public simply because they play the role of status symbols, as such goods tend to signify the purchasing power of those who acquire them.[citation needed] These items, while not necessarily being better (in quality, performance, or appearance) than their less expensive substitutes, are purchased with the main purpose of displaying wealth or income of their owners.[citation needed] These kinds of goods are the objects of a socio-economic phenomenon called conspicuous consumption and commonly include luxury vehicles, watches, jewelry, designer clothing, yachts, as well as large residences, urban mansions, and country houses.[citation needed]

The idea of a luxury brand is not necessarily a product or a price point, but a mindset where the core values expressed by a brand are directly connected to the producer's dedication and alignment to perceptions of quality with its customers' values and aspirations.[21] Thus, it is these target customers, not the product, that make a luxury brand.[21] Brands that are considered luxury connect with their customers by communicating that they are the top of their class or considered the best in their field.[22] Furthermore, these brands must deliver, in some meaningful way, measurably better performance.[22]

What consumers perceive as luxurious brands and products changes over the years, but there appear to be three main drivers: (1) a high price, especially when compared to other brands within its segment; (2) limited supply, in that a brand may not need to be expensive, but it arguably should not be easily obtainable, contributing to the customers' feeling that they have something special; and (3) endorsement by celebrities, which can make a brand or particular products more appealing for consumers and thus more "luxurious" in their minds.[23] Two additional elements of luxury brands are special packaging and personalization.[23] These differentiating elements distance the brands from the mass market and thus provide them with a unique feeling and user experience, as well as a special and memorable "luxury feel" for customers.[23]

However, the concept of a luxury brand is now so popular that it is used in almost every retail, manufacturing, and service sector.[24] Moreover, new marketing concepts such as "mass luxury" or "hyper luxury" further blur the definition of what is a luxury product, a luxury brand, or a luxury company.[24] Examples include LVMH (Louis Vuitton Moet Hennessy), the largest luxury goods producer in the world with over fifty brands, including Louis Vuitton, the brand with the world's first fashion designer label.[25] The LVMH group made a net profit of €8.1 billion on sales of €42.6 billion in 2017.[26] Other market leaders[citation needed] include Richemont and Kering (previously named PPR).

A rather small group in comparison, the wealthy tend to be extremely influential.[citation needed] Once a brand gets an "endorsement" from members of this group, then the brand can be defined as a true "luxury" brand.

An example of different product lines in the same brand is found in the automotive industry, with "entry-level" cars marketed to younger, less wealthy consumers and higher-cost models for older and wealthier consumers.[citation needed]

The advertising expenditure for the average luxury brand is 5-15% of sales revenue, or about 25% with the inclusion of other communications such as public relations, events, and sponsorships.[27]

Since the development of mass-market "luxury" brands in the 1800s, department stores dedicated to selling all major luxury brands have opened up in most major cities around the world.

Le Bon Marché in Paris, France is credited as one of the first of its kind.

In the United States, the development of luxury-oriented department stores not only changed the retail industry, but also ushered in the idea of freedom through consumerism and a new opportunity for middle- and upper-class women.[28]

Fashion brands within the luxury goods market tend to be concentrated in exclusive or affluent districts of cities around the world.

These include:

Web design

Web design encompasses many different skills and disciplines in the production and maintenance of websites.

The different areas of Web design include web graphic design; interface design; authoring, including standardised code and proprietary software; user experience design; and search engine optimization.

Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[1]

The term "web design" is normally used to describe the design process relating to the front-end (client-side) design of a website, including writing markup.

Web design partially overlaps web engineering in the broader scope of web development.

Web designers are expected to have an awareness of usability and, if their role involves creating markup, to be up to date with web accessibility guidelines.

Although web design has a fairly recent history, it can be linked to other areas such as graphic design, user experience, and multimedia arts, though it is more aptly seen from a technological standpoint.

It has become a large part of people's everyday lives. It is hard to imagine the Internet without animated graphics, different styles of typography, backgrounds, and music.

In 1989, whilst working at CERN, Tim Berners-Lee proposed the creation of a global hypertext project, which later became known as the World Wide Web.

From 1991 to 1993 the World Wide Web was born. Text-only pages could be viewed using a simple line-mode browser.[2]

In 1993 Marc Andreessen and Eric Bina created the Mosaic browser. At the time there were multiple browsers, but the majority of them were Unix-based and naturally text-heavy.

There had been no integrated approach to graphic design elements such as images or sounds.

The Mosaic browser broke this mould.[3] The W3C was created in October 1994 to "lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability."[4] This discouraged any one company from monopolizing a proprietary browser and programming language, which could have altered the effect of the World Wide Web as a whole.

The W3C continues to set standards, which can today be seen with JavaScript and other languages.

In 1994 Andreessen formed Mosaic Communications Corp., which later became known as Netscape Communications and released the Netscape 0.9 browser.

Netscape created its own HTML tags without regard to the traditional standards process.

For example, Netscape 1.1 included tags for changing background colours and formatting text with tables on web pages.

From 1996 to 1999 the browser wars raged, as Microsoft and Netscape fought for ultimate browser dominance.

During this time there were many new technologies in the field, notably Cascading Style Sheets, JavaScript, and Dynamic HTML.

On the whole, the browser competition did lead to many positive creations and helped Web design evolve at a rapid pace.[5] In 1996, Microsoft released its first competitive browser, which was complete with its own features and HTML tags.

It was also the first browser to support style sheets, which at the time was seen as an obscure authoring technique and is today an important aspect of Web design.[5] The HTML markup for tables was originally intended for displaying tabular data.

However designers quickly realized the potential of using HTML tables for creating the complex, multi-column layouts that were otherwise not possible.

At this time design and good aesthetics seemed to take precedence over good mark-up structure, and little attention was paid to semantics and web accessibility.

HTML sites were limited in their design options, even more so with earlier versions of HTML.

To create complex designs, many Web designers had to use complicated table structures or even use blank spacer .GIF images to stop empty table cells from collapsing.[6] CSS was introduced in December 1996 by the W3C to support presentation and layout.

This allowed HTML code to be semantic rather than both semantic and presentational, and improved web accessibility; see tableless web design.

In 1996, Flash (originally known as FutureSplash) was developed.

At the time, the Flash content development tool was relatively simple compared to now, using basic layout and drawing tools, a limited precursor to ActionScript, and a timeline, but it enabled Web designers to go beyond the point of HTML, animated GIFs and JavaScript.

However, because Flash required a plug-in, many web developers avoided using it for fear of limiting their market share due to lack of compatibility.

Instead, designers reverted to gif animations (if they didn't forego using motion graphics altogether) and JavaScript for widgets.

But the benefits of Flash made it popular enough among specific target markets to eventually work its way to the vast majority of browsers, and powerful enough to be used to develop entire sites.[6] During 1998 Netscape released Netscape Communicator code under an open source licence, enabling thousands of developers to participate in improving the software.

However, they decided to start from the beginning, which guided the development of the open source browser and soon expanded to a complete application platform.[5] The Web Standards Project was formed and promoted browser compliance with HTML and CSS standards by creating Acid1, Acid2, and Acid3 tests.

2000 was a big year for Microsoft.

Internet Explorer was released for Mac; this was significant as it was the first browser that fully supported HTML 4.01 and CSS 1, raising the bar in terms of standards compliance.

It was also the first browser to fully support the PNG image format.[5] During this time Netscape was sold to AOL, and this was seen as Netscape's official loss to Microsoft in the browser wars.[5]

Since the start of the 21st century the web has become more and more integrated into people's lives.

As this has happened the technology of the web has also moved on.

There have also been significant changes in the way people use and access the web, and this has changed how sites are designed.

Since the end of the browser wars[when?] new browsers have been released.

Many of these are open source, meaning that they tend to have faster development and to be more supportive of new standards.

The new options are considered by many[weasel words] to be better than Microsoft's Internet Explorer.

The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard.[when?] While the term HTML5 is only used to refer to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3, and JavaScript).

Web designers use a variety of different tools depending on what part of the production process they are involved in.

These tools are updated over time by newer standards and software but the principles behind them remain the same.

Web designers use both vector and raster graphics editors to create web-formatted imagery or design prototypes.

Technologies used to create websites include W3C standards like HTML and CSS, which can be hand-coded or generated by WYSIWYG editing software.

Other tools Web designers might use include mark-up validators[7] and other testing tools for usability and accessibility to ensure their websites meet web accessibility guidelines.[8]

Marketing and communication design on a website may identify what works for its target market. This can be an age group or a particular strand of culture; thus the designer needs to understand the trends of that audience.

Designers may also understand the type of website they are designing, meaning, for example, that (B2B) business-to-business website design considerations might differ greatly from a consumer targeted website such as a retail or entertainment website.

Careful consideration might be made to ensure that the aesthetics or overall design of a site do not clash with the clarity and accuracy of the content or the ease of web navigation,[9] especially on a B2B website.

Designers may also consider the reputation of the owner or business the site is representing to make sure they are portrayed favourably.

User understanding of the content of a website often depends on user understanding of how the website works.

This is part of the user experience design.

User experience is related to layout, clear instructions and labeling on a website.

How well a user understands how they can interact on a site may also depend on the interactive design of the site.

If a user perceives the usefulness of the website, they are more likely to continue using it.

Users who are skilled and well versed with website use may find a more distinctive, yet less intuitive or less user-friendly website interface useful nonetheless.

However, users with less experience are less likely to see the advantages or usefulness of a less intuitive website interface.

This drives the trend for a more universal user experience and ease of access to accommodate as many users as possible regardless of user skill.[10] Much of the user experience design and interactive design are considered in the user interface design.

Advanced interactive functions may require plug-ins if not advanced coding language skills.

Choosing whether or not to use interactivity that requires plug-ins is a critical decision in user experience design.

If the plug-in doesn't come pre-installed with most browsers, there's a risk that the user will have neither the know-how nor the patience to install a plug-in just to access the content.

If the function requires advanced coding language skills, it may be too costly in either time or money to code compared to the amount of enhancement the function will add to the user experience.

There's also a risk that advanced interactivity may be incompatible with older browsers or hardware configurations.

Publishing a function that doesn't work reliably is potentially worse for the user experience than making no attempt.

Whether the interactivity is likely to be needed or worth any risks depends on the target audience.

Part of the user interface design is affected by the quality of the page layout.

For example, a designer may consider whether the site's page layout should remain consistent on different pages when designing the layout.

Page pixel width may also be considered vital for aligning objects in the layout design.

The most popular fixed-width websites generally have the same set width to match the current most popular browser window, at the current most popular screen resolution, on the current most popular monitor size.

Most pages are also center-aligned for concerns of aesthetics on larger screens.

Fluid layouts increased in popularity around 2000 as an alternative to HTML-table-based layouts and grid-based design, in both page-layout design principle and coding technique, but were very slow to be adopted.[note 1] This was due to considerations of screen-reading devices and varying window sizes, over which designers have no control.

Accordingly, a design may be broken down into units (sidebars, content blocks, embedded advertising areas, navigation areas) that are sent to the browser and which will be fitted into the display window by the browser, as best it can.

Since the browser does recognize the details of the reader's screen (window size, font size relative to window, etc.), the browser can make user-specific layout adjustments to fluid layouts, but not to fixed-width layouts.

Such a display may often change the relative position of major content units; sidebars, for example, may be displaced below body text rather than appearing to the side of it.

This is a more flexible display than a hard-coded grid-based layout that doesn't fit the device window.

In particular, the relative position of content blocks may change while leaving the content within the block unaffected.

This also minimizes the user's need to horizontally scroll the page.

Responsive web design is a newer approach, based on CSS3 and a deeper level of per-device specification within the page's style sheet through an enhanced use of the CSS @media rule.

In March 2018 Google announced it would be rolling out mobile-first indexing.[11] Sites using responsive design are well placed to ensure they meet this new approach.

Web designers may choose to limit the variety of website typefaces to only a few which are of a similar style, instead of using a wide range of typefaces or type styles.

Most browsers recognize a specific number of safe fonts, which designers mainly use in order to avoid complications.

Font downloading was later included in the CSS3 fonts module and has since been implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5.

This has subsequently increased interest in web typography, as well as the usage of font downloading.

Most site layouts incorporate negative space to break the text up into paragraphs and also avoid center-aligned text.[12] The page layout and user interface may also be affected by the use of motion graphics.

The choice of whether or not to use motion graphics may depend on the target market for the website.

Motion graphics may be expected or at least better received with an entertainment-oriented website.

However, a website target audience with a more serious or formal interest (such as business, community, or government) might find animations unnecessary and distracting if only for entertainment or decoration purposes.

This doesn't mean that more serious content couldn't be enhanced with animated or video presentations that are relevant to the content.

In either case, motion graphic design may make the difference between more effective visuals or distracting visuals.

Motion graphics that are not initiated by the site visitor can produce accessibility issues.

The World Wide Web consortium accessibility standards require that site visitors be able to disable the animations.[13] Website designers may consider it to be good practice to conform to standards.

This is usually done via a description specifying what the element is doing.

Failure to conform to standards may not make a website unusable or error-prone, but standards can relate to the correct layout of pages for readability as well as making sure coded elements are closed appropriately.

This includes errors in code, more organized layout for code, and making sure IDs and classes are identified properly.

Poorly-coded pages are sometimes colloquially called tag soup.

Validating via W3C[7] can only be done when a correct DOCTYPE declaration is made, which is used to highlight errors in code.

The system identifies the errors and areas that do not conform to Web design standards.

This information can then be corrected by the user.[14]

There are two ways websites are generated: statically or dynamically.

A static website stores a unique file for every page.

Each time that page is requested, the same content is returned.

This content is created once, during the design of the website.

It is usually manually authored, although some sites use an automated creation process, similar to a dynamic website, whose results are stored long-term as completed pages.

These automatically created static sites became more popular around 2015, with generators such as Jekyll and Adobe Muse.[15] The benefits of a static website are that it is simpler to host, as its server only needs to serve static content, not execute server-side scripts.

This required less server administration and had less chance of exposing security holes.

They could also serve pages more quickly, on low-cost server hardware.

These advantages became less important as cheap web hosting expanded to also offer dynamic features, and virtual servers offered high performance for short intervals at low cost.

Almost all websites have some static content, as supporting assets such as images and style sheets are usually static, even on a website with highly dynamic pages.
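For a concrete sense of how little machinery static serving needs, here is a minimal sketch using Python's standard library (an illustration, not a production setup): every page is an ordinary file returned unchanged.

```python
# Minimal static-file server: each request is answered by returning a
# file from the current directory as stored; no server-side scripts run.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("Serving static files at http://localhost:8000")
server.serve_forever()
```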

Dynamic websites are generated on the fly and use server-side technology to generate webpages.

They typically extract their content from one or more back-end databases: some are database queries across a relational database to query a catalogue or to summarise numeric information, while others may use a NoSQL document database such as MongoDB to store larger units of content, such as blog posts or wiki articles.

In the design process, dynamic pages are often mocked-up or wireframed using static pages.

The skill set needed to develop dynamic web pages is much broader than for static pages, involving server-side and database coding as well as client-side interface design.

Even medium-sized dynamic projects are thus almost always a team effort.

When dynamic web pages first developed, they were typically coded directly in languages such as Perl, PHP or ASP.

Some of these, notably PHP and ASP, used a 'template' approach where a server-side page resembled the structure of the completed client-side page and data was inserted into places defined by 'tags'.

This was a quicker means of development than coding in a purely procedural coding language such as Perl.
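A minimal sketch of that template approach (illustrative only: the early systems named above were PHP and ASP, and this stand-in uses Python's built-in string templates): the server-side page resembles the finished client-side page, with tags marking where data is inserted.

```python
from string import Template

# Server-side "template" page: it mirrors the structure of the finished
# client-side page, with $-tags marking where data will be inserted.
page = Template("""\
<html>
  <head><title>$title</title></head>
  <body>
    <h1>$title</h1>
    <p>Welcome back, $username.</p>
  </body>
</html>
""")

# In a real dynamic site these values would come from a database query.
print(page.substitute(title="My Blog", username="alice"))
```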

Both of these approaches have now been supplanted for many websites by higher-level application-focused tools such as content management systems.

These build on top of general purpose coding platforms and assume that a website exists to offer content according to one of several well recognised models, such as a time-sequenced blog, a thematic magazine or news site, a wiki or a user forum.

These tools make the implementation of such a site very easy, and a purely organisational and design-based task, without requiring any coding.

Editing the content itself (as well as the template page) can be done both by means of the site itself, and with the use of third-party software.

The ability to edit all pages is provided only to a specific category of users (for example, administrators, or registered users).

Less frequently, anonymous users are allowed to edit certain web content (for example, adding messages on forums).

Wikipedia is an example of a site that allows anonymous edits.

Usability experts, including Jakob Nielsen and Kyle Soucy, have often emphasised homepage design for website success, asserting that the homepage is the most important page on a website.[16][17][18][19] However, practitioners in the 2000s began to find that a growing share of website traffic was bypassing the homepage, going directly to internal content pages through search engines, e-newsletters, and RSS feeds,[20] leading many practitioners to argue that homepages are less important than most people think.[21][22][23][24] Jared Spool argued in 2007 that a site's homepage was actually the least important page on a website.[25]

In 2012 and 2013, carousels (also called 'sliders' and 'rotating banners') became an extremely popular design element on homepages, often used to showcase featured or recent content in a confined space.[26][27] Many practitioners argue that carousels are an ineffective design element and hurt a website's search engine optimisation and usability.[27][28][29]

There are two primary jobs involved in creating a website: the Web designer and the web developer, who often work closely together on a website.[30] The Web designers are responsible for the visual aspect, which includes the layout, coloring and typography of a web page.

Web designers will also have a working knowledge of markup languages such as HTML and CSS, although the extent of their knowledge will differ from one Web designer to another.

Particularly in smaller organizations, one person will need the necessary skills for designing and programming the full web page, while larger organizations may have a Web designer responsible for the visual aspect alone.[31] Further jobs which may become involved in the creation of a website include:

Transistor

A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power.

It is composed of semiconductor material, usually with at least three terminals for connection to an external circuit.

A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals.

Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal.
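As a worked illustration (a standard textbook relation for the bipolar junction transistors discussed below, not a formula from this text), the collector current of a BJT in its active region is the base current multiplied by the current gain β:

```latex
\[
  I_C = \beta\, I_B ,
  \qquad
  I_E = I_C + I_B = (\beta + 1)\, I_B .
\]
% Example: with beta = 100 and a base current of 10 microamps,
% the collector current is 1 milliamp, i.e. the small input
% current controls an output current one hundred times larger.
```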

Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.

Austro-Hungarian physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time.[1] The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under William Shockley at Bell Labs.

They shared the 1956 Nobel Prize in Physics for their achievement.[2] The most widely used transistor is the MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor, which was invented by Egyptian engineer Mohamed Atalla with Korean engineer Dawon Kahng at Bell Labs in 1959.[3][4][5] The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[6] Transistors revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things.

The first transistor and the MOSFET are on the list of IEEE milestones in electronics.[7][8] The MOSFET is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems.[9] An estimated total of 13 sextillion MOSFETs were manufactured between 1960 and 2018 (at least 99.9% of all transistors), making the MOSFET the most widely manufactured device in history.[10] Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used.

A transistor may have only one kind of charge carrier, as in a field-effect transistor, or two kinds of charge carriers, as in bipolar junction transistor devices.

Compared with the vacuum tube, transistors are generally smaller and require less power to operate.

Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages.

Many types of transistors are made to standardized specifications by multiple manufacturers.

The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony.

The triode, however, was a fragile device that consumed a substantial amount of power.

In 1909, physicist William Eccles discovered the crystal diode oscillator.[11] Austro-Hungarian physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925,[12] which was intended to be a solid-state replacement for the triode.[13][14] Lilienfeld also filed identical patents in the United States in 1926[15] and 1928.[16][17] However, Lilienfeld did not publish any research articles about his devices, nor did his patents cite any specific examples of a working prototype.

Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built.[18] In 1934, German inventor Oskar Heil patented a similar device in Europe.[19] From November 17, 1947, to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in Murray Hill, New Jersey, performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input.[20] Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors.

The term Transistor was coined by John R.

Pierce as a contraction of the term transresistance.[21][22][23] According to Lillian Hoddeson and Vicki Daitch, authors of a biography of John Bardeen, Shockley had proposed that Bell Labs' first patent for a Transistor should be based on the field-effect and that he be named as the inventor.

Having unearthed Lilienfeld's patents that went into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect Transistor that used an electric field as a "grid" was not new.

Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact Transistor.[18] In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the Transistor effect".[24][25] Shockley's research team initially attempted to build a field-effect Transistor (FET), by trying to modulate the conductivity of a semiconductor, but was unsuccessful, mainly due to problems with the surface states, the dangling bond, and the germanium and copper compound materials.

Their efforts to understand the mysterious reasons behind their failure to build a working FET led them instead to invent the bipolar point-contact and junction transistors.[26][27] In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux, a Westinghouse subsidiary located in Paris.

Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II.

Using this knowledge, he began researching the phenomenon of "interference" in 1947.

By June 1948, witnessing currents flowing through point-contacts, Mataré produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947.

Realizing that Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network, and filed its first transistor patent application on August 13, 1948.[28][29][30] The first bipolar junction transistors were invented by Bell Labs' William Shockley, who applied for a patent (2,569,347) on June 26, 1948.

On April 12, 1950, Bell Labs chemists Gordon Teal and Morgan Sparks successfully produced a working bipolar NPN junction amplifying germanium transistor.

Bell Labs announced the discovery of this new "sandwich" transistor in a press release on July 4, 1951.[31][32] The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953, capable of operating up to 60 MHz.[33] These were made by etching depressions into an n-type germanium base from both sides with jets of indium(III) sulfate until it was a few ten-thousandths of an inch thick.

Indium electroplated into the depressions formed the collector and emitter.[34][35] The first "prototype" pocket transistor radio was shown by INTERMETALL (a company founded by Herbert Mataré in 1952) at the Internationale Funkausstellung Düsseldorf between August 29, 1953 and September 6, 1953.[36][37] The first "production" pocket transistor radio was the Regency TR-1, released in October 1954.[25] Produced as a joint venture between the Regency Division of Industrial Development Engineering Associates (I.D.E.A.) and Texas Instruments of Dallas, Texas, the TR-1 was manufactured in Indianapolis, Indiana.

It was a near pocket-sized radio featuring four transistors and one germanium diode.

The industrial design was outsourced to the Chicago firm of Painter, Teague and Petertil.

It was initially released in one of four different colours: black, bone white, red, and gray.

Other colours shortly followed.[38][39][40] The first "production" all-transistor car radio was developed by the Chrysler and Philco corporations, and was announced in the April 28, 1955 edition of the Wall Street Journal.

Chrysler had made the all-Transistor car radio, Mopar model 914HR, available as an option starting in fall 1955 for its new line of 1956 Chrysler and Imperial cars which first hit the dealership showroom floors on October 21, 1955.[41][42][43] The Sony TR-63, released in 1957, was the first mass-produced Transistor radio, leading to the mass-market penetration of Transistor radios.[44] The TR-63 went on to sell seven million units worldwide by the mid-1960s.[45] Sony's success with Transistor radios led to Transistors replacing vacuum tubes as the dominant electronic technology in the late 1950s.[46] The first working silicon Transistor was developed at Bell Labs on January 26, 1954 by Morris Tanenbaum.

The first commercial silicon Transistor was produced by Texas Instruments in 1954.

This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs.[47][48][49] Semiconductor companies initially focused on junction Transistors in the early years of the semiconductor industry.

However, the junction Transistor was a relatively bulky device that was difficult to manufacture on a mass-production basis, which limited it to a number of specialised applications.

Field-effect Transistors (FETs) were theorized as potential alternatives to junction Transistors, but researchers could not get FETs to work properly, largely due to the troublesome surface state barrier that prevented the external electric field from penetrating into the material.[6] In the 1950s, Egyptian engineer Mohamed Atalla investigated the surface properties of silicon semiconductors at Bell Labs, where he proposed a new method of semiconductor device fabrication, coating a silicon wafer with an insulating layer of silicon oxide so that electricity could reliably penetrate to the conducting silicon below, overcoming the surface states that prevented electricity from reaching the semiconducting layer.

This is known as surface passivation, a method that became critical to the semiconductor industry as it later made possible the mass-production of silicon integrated circuits.[50][51] He presented his findings in 1957.[52] Building on his surface passivation method, he developed the metal–oxide–semiconductor (MOS) process.[50] He proposed the MOS process could be used to build the first working silicon FET, which he began working on building with the help of his Korean colleague Dawon Kahng.[50] The metal–oxide–semiconductor field-effect Transistor (MOSFET), also known as the MOS Transistor, was invented by Mohamed Atalla and Dawon Kahng in 1959.[3][4] The MOSFET was the first truly compact Transistor that could be miniaturised and mass-produced for a wide range of uses.[6] With its high scalability,[53] and much lower power consumption and higher density than bipolar junction Transistors,[54] the MOSFET made it possible to build high-density integrated circuits,[5] allowing the integration of more than 10,000 Transistors in a single IC.[55] CMOS (complementary MOS) was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.[56] The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967.[57] A double-gate MOSFET was first demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa and Yutaka Hayashi.[58][59] FinFET (fin field-effect Transistor), a type of 3D non-planar multi-gate MOSFET, originated from the research of Digh Hisamoto and his team at Hitachi Central Research Laboratory in 1989.[60][61] Transistors are the key active components in practically all modern electronics.

Many thus consider the Transistor to be one of the greatest inventions of the 20th century.[62] The MOSFET (metal–oxide–semiconductor field-effect Transistor), also known as the MOS Transistor, is by far the most widely used Transistor, used in applications ranging from computers and electronics[51] to communications technology such as smartphones.[63] The MOSFET has been considered to be the most important Transistor,[64] possibly the most important invention in electronics,[65] and the birth of modern electronics.[66] The MOS Transistor has been the fundamental building block of modern digital electronics since the late 20th century, paving the way for the digital age.[9] The US Patent and Trademark Office calls it a "groundbreaking invention that transformed life and culture around the world".[63] Its importance in today's society rests on its ability to be mass-produced using a highly automated process (semiconductor device fabrication) that achieves astonishingly low per-Transistor costs.

The invention of the first Transistor at Bell Labs was named an IEEE Milestone in 2009.[67] The list of IEEE Milestones also includes the inventions of the junction Transistor in 1948 and the MOSFET in 1959.[68] Although several companies each produce over a billion individually packaged (known as discrete) MOS Transistors every year,[69] the vast majority of Transistors are now produced in integrated circuits (often shortened to IC, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits.

A logic gate consists of up to about twenty Transistors whereas an advanced microprocessor, as of 2009, can use as many as 3 billion Transistors (MOSFETs).[70] "About 60 million Transistors were built in 2002… for [each] man, woman, and child on Earth."[71] The MOS Transistor is the most widely manufactured device in history.[10] As of 2013, billions of Transistors are manufactured every day, nearly all of which are MOSFET devices.[5] Between 1960 and 2018, an estimated total of 13 sextillion MOS Transistors have been manufactured, accounting for at least 99.9% of all Transistors.[10] The Transistor's low cost, flexibility, and reliability have made it a ubiquitous device.

Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery.

It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical system to control that same function.

The essential usefulness of a Transistor comes from its ability to use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals.

This property is called gain.

It can produce a stronger output signal, a voltage or current, that is proportional to a weaker input signal, and thus can act as an amplifier.

Alternatively, the transistor can be used to turn current on or off in a circuit as an electrically controlled switch, where the amount of current is determined by other circuit elements.[72] There are two basic types of transistors, which differ slightly in how they are used in a circuit.
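A minimal numeric sketch of the amplifier and switch roles just described, assuming an idealized bipolar-style device with a fixed current gain of 100 and a circuit that limits collector current to 0.5 A (both values are illustrative, not taken from any datasheet):

```python
# Idealized transistor: collector current follows base current times a
# fixed gain (beta) until the surrounding circuit limits it (saturation).
# BETA = 100 and the 0.5 A limit are assumed, illustrative values.

BETA = 100
I_C_LIMIT = 0.5  # amps allowed by the rest of the circuit

def collector_current(i_base):
    """Return collector current (A) for a given base current (A)."""
    return min(BETA * i_base, I_C_LIMIT)

# Amplifier: a 1 mA base current appears as a 100 mA collector current.
print(collector_current(0.001))  # 0.1

# Switch: ample base drive saturates the device ("on"); none cuts it off.
print(collector_current(0.02))   # 0.5 (clamped: saturated, switch closed)
print(collector_current(0.0))    # 0.0 (cut off, switch open)
```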

A bipolar Transistor has terminals labeled base, collector, and emitter.

A small current at the base terminal (that is, flowing between the base and the emitter) can control or switch a much larger current between the collector and emitter terminals.

For a field-effect Transistor, the terminals are labeled gate, source, and drain, and a voltage at the gate can control a current between source and drain.[73] The image represents a typical bipolar Transistor in a circuit.

Charge will flow between emitter and collector terminals depending on the current in the base.

Because internally the base and emitter connections behave like a semiconductor diode, a voltage drop develops between base and emitter while the base current exists.

The amount of this voltage depends on the material the Transistor is made from, and is referred to as VBE.[73] Transistors are commonly used in digital circuits as electronic switches which can be either in an "on" or "off" state, both for high-power applications such as switched-mode power supplies and for low-power applications such as logic gates.

Important parameters for this application include the current switched, the voltage handled, and the switching speed, characterised by the rise and fall times.[73] In a grounded-emitter Transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially.

The collector voltage drops because of reduced resistance from collector to emitter.

If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage.

This is called saturation, because current flows freely from collector to emitter.

When saturated, the switch is said to be on.[74] Providing sufficient base drive current is a key problem in the use of bipolar Transistors as switches.

The Transistor provides current gain, allowing a relatively large current in the collector to be switched by a much smaller current into the base terminal.

The ratio of these currents varies depending on the type of Transistor, and even for a particular type, varies depending on the collector current.

In the example light-switch circuit shown, the resistor is chosen to provide enough base current to ensure the Transistor will be saturated.[73] In a switching circuit, the idea is to simulate, as near as possible, the ideal switch having the properties of open circuit when off, short circuit when on, and an instantaneous transition between the two states.
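As a worked example of that resistor choice, with assumed, illustrative values: to switch a bulb drawing 100 mA from a 5 V input, using a transistor whose gain may be as low as β = 50 and taking VBE ≈ 0.7 V, the base resistor must satisfy

$$I_B \ge \frac{I_C}{\beta_{\min}} = \frac{100\ \mathrm{mA}}{50} = 2\ \mathrm{mA}, \qquad R_B \le \frac{V_{\mathrm{in}} - V_{BE}}{I_B} = \frac{5\ \mathrm{V} - 0.7\ \mathrm{V}}{2\ \mathrm{mA}} \approx 2.2\ \mathrm{k\Omega},$$

so a somewhat smaller resistor (say 1.5 kΩ) would be chosen in practice to guarantee saturation with margin.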

Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry, the resistance of the transistor in the "on" state is small enough not to affect circuitry, and the transition between the two states is fast enough not to have a detrimental effect.[73]

The common-emitter amplifier is designed so that a small change in voltage (Vin) changes the small current through the base of the transistor; its current amplification, combined with the properties of the circuit, means that small swings in Vin produce large changes in Vout.[73] Various configurations of single-transistor amplifier are possible, with some providing current gain, some voltage gain, and some both.
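In the standard small-signal approximation (the usual first-order model, not spelled out in the text above), the common-emitter voltage gain is set by the device transconductance and the collector load resistor:

$$A_v \approx -g_m R_C, \qquad g_m = \frac{I_C}{V_T},$$

so, for an assumed bias of IC = 1 mA (thermal voltage VT ≈ 26 mV, giving gm ≈ 38 mS) and RC = 4.7 kΩ, small swings in Vin appear at Vout amplified roughly 180-fold and inverted.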

From mobile phones to televisions, vast numbers of products include amplifiers for sound reproduction, radio transmission, and signal processing.

The first discrete-Transistor audio amplifiers barely supplied a few hundred milliwatts, but power and audio fidelity gradually increased as better Transistors became available and amplifier architecture evolved.[73] Modern Transistor audio amplifiers of up to a few hundred watts are common and relatively inexpensive.

Before Transistors were developed, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment.

The key advantages that have allowed transistors to replace vacuum tubes in most applications are their smaller size, lower operating power, and greater reliability, though transistors also have limitations of their own, such as a lower tolerance of high operating voltages. Transistors are categorized by semiconductor material, structure, polarity, maximum power rating, operating frequency, application, and physical packaging; hence, a particular transistor may be described as a silicon, surface-mount, BJT, n–p–n, low-power, high-frequency switch.

A popular way to remember which symbol represents which type of Transistor is to look at the arrow and how it is arranged.

Within an NPN Transistor symbol, the arrow will Not Point iN.

Conversely, within the PNP symbol you see that the arrow Points iN Proudly.

The field-effect Transistor, sometimes called a unipolar Transistor, uses either electrons (in n-channel FET) or holes (in p-channel FET) for conduction.

The four terminals of the FET are named source, gate, drain, and body (substrate).

On most FETs, the body is connected to the source inside the package, and this will be assumed for the following description.

In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain region.

The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals, hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source.

As the gate–source voltage (VGS) is increased, the drain–source current (IDS) increases exponentially for VGS below threshold, and then at a roughly quadratic rate, IDS ∝ (VGS − VT)², where VT is the threshold voltage at which drain current begins,[78] in the "space-charge-limited" region above threshold.

A quadratic behavior is not observed in modern devices, for example, at the 65 nm technology node.[79] For low noise at narrow bandwidth the higher input resistance of the FET is advantageous.
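Written out, the two regimes described above follow the standard long-channel ("square-law") idealization, consistent with the proportionality quoted above; μn (channel mobility), Cox (gate-oxide capacitance per unit area), and the W/L geometry ratio are the usual textbook parameters, not values given in the text:

$$I_{DS} \propto e^{\,qV_{GS}/(nkT)} \quad \text{(subthreshold)}, \qquad I_{DS} = \frac{\mu_n C_{ox}}{2}\,\frac{W}{L}\,\left(V_{GS} - V_T\right)^2 \quad \text{(saturation, above threshold)},$$

where n is the subthreshold slope factor; as noted above, short-channel effects make modern devices deviate from the quadratic law.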

FETs are divided into two families: junction FET (JFET) and insulated gate FET (IGFET).

The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor.

Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain.

Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode.

Also, both devices operate in the depletion mode; they both have a high input impedance, and they both conduct current under the control of an input voltage.

Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse biased p–n junction is replaced by a metal–semiconductor junction.

These, and the HEMTs (high-electron-mobility Transistors, or HFETs), in which a two-dimensional electron gas with very high carrier mobility is used for charge transport, are especially suitable for use at very high frequencies (several GHz).

FETs are further divided into depletion-mode and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage.

For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction.

For the depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction.

For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices.

Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices, while most IGFETs are enhancement-mode types.

The metal–oxide–semiconductor field-effect Transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon Transistor (MOS Transistor, or MOS),[5] is a type of field-effect Transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon.

It has an insulated gate, whose voltage determines the conductivity of the device.

This ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals.

The MOSFET is by far the most common Transistor, and the basic building block of most modern electronics.[9] The MOSFET accounts for 99.9% of all Transistors in the world.[10] Bipolar Transistors are so named because they conduct by using both majority and minority carriers.

The bipolar junction Transistor, the first type of Transistor to be mass-produced, is a combination of two junction diodes, and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n Transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p Transistor).

This construction produces two p–n junctions: a base–emitter junction and a base–collector junction, separated by a thin region of semiconductor known as the base region.

(Two junction diodes wired together without sharing an intervening semiconducting region will not make a Transistor).

BJTs have three terminals, corresponding to the three layers of semiconductor—an emitter, a base, and a collector.

They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current.[80] In an n–p–n Transistor operating in the active region, the emitter–base junction is forward biased (electrons and holes recombine at the junction), and the base-collector junction is reverse biased (electrons and holes are formed at, and move away from the junction), and electrons are injected into the base region.

Because the base is narrow, most of these electrons will diffuse into the reverse-biased base–collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current.

Also, because the base is lightly doped (in comparison to the emitter and collector regions), recombination rates are low, permitting more carriers to diffuse across the base region.

By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled.[80] Collector current is approximately β (common-emitter current gain) times the base current.

It is typically greater than 100 for small-signal Transistors but can be smaller in Transistors designed for high-power applications.

Unlike the field-effect Transistor (see below), the BJT is a low-input-impedance device.

Also, as the base–emitter voltage (VBE) is increased, the base–emitter current and hence the collector–emitter current (ICE) increase exponentially, according to the Shockley diode model and the Ebers–Moll model.

Because of this exponential relationship, the BJT has a higher transconductance than the FET.
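The exponential relationship just cited takes the diode-law form (with IS the saturation current and VT here the thermal voltage, kT/q ≈ 26 mV at room temperature); differentiating it makes explicit why the transconductance scales directly with collector current:

$$I_C \approx I_S\,e^{V_{BE}/V_T}, \qquad g_m = \frac{dI_C}{dV_{BE}} = \frac{I_C}{V_T}.$$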

Bipolar Transistors can be made to conduct by exposure to light, because absorption of photons in the base region generates a photocurrent that acts as a base current; the collector current is approximately β times the photocurrent.

Devices designed for this purpose have a transparent window in the package and are called phototransistors.

The MOSFET is by far the most widely used transistor for both digital and analog circuits,[81] accounting for 99.9% of all transistors in the world.[10] The bipolar junction transistor (BJT) was previously the most commonly used transistor, from the 1950s to the 1960s.

Even after MOSFETs became widely available in the 1970s, the BJT remained the Transistor of choice for many analog circuits such as amplifiers because of their greater linearity, up until MOSFET devices (such as power MOSFETs, LDMOS and RF CMOS) replaced them for most power electronic applications in the 1980s.

In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits in the 1970s.

Discrete MOSFETs (typically power MOSFETs) can be applied in Transistor applications, including analog circuits, voltage regulators, amplifiers, power transmitters and motor drivers.

The types of some Transistors can be parsed from the part number.

There are three major semiconductor naming standards.

In each, the alphanumeric prefix provides clues to type of the device.

The JIS-C-7012 specification for transistor part numbers starts with "2S",[89] e.g. 2SD965, but sometimes the "2S" prefix is not marked on the package – a 2SD965 might only be marked "D965"; a 2SC1815 might be listed by a supplier as simply "C1815".

This series sometimes has suffixes (such as "R", "O", "BL", standing for "red", "orange", "blue", etc.) to denote variants, such as tighter hFE (gain) groupings.

The Pro Electron standard, the European Electronic Component Manufacturers Association part numbering scheme, begins with two letters: the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second letter denotes the intended use (A for diode, C for general-purpose Transistor, etc.).

A 3-digit sequence number (or one letter then two digits, for industrial types) follows.

With early devices this indicated the case type.

Suffixes may be used, with a letter (e.g. "C" often means high hFE, such as in: BC549C[90]) or other codes may follow to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A[91]).

The JEDEC EIA-370 transistor device numbers usually start with "2N", indicating a three-terminal device (dual-gate field-effect transistors are four-terminal devices, so begin with 3N), followed by a 2-, 3- or 4-digit sequential number with no significance as to device properties (although early devices with low numbers tend to be germanium).

For example, 2N3055 is a silicon n–p–n power transistor; 2N1301 is a p–n–p germanium switching transistor.

A letter suffix (such as "A") is sometimes used to indicate a newer variant, but rarely gain groupings.

Manufacturers of devices may have their own proprietary numbering system, for example CK722.

Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) now is an unreliable indicator of who made the device.

Some proprietary naming schemes adopt parts of other naming schemes, for example a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices).

Military part numbers sometimes are assigned their own codes, such as the British Military CV Naming System.

Manufacturers buying large numbers of similar parts may have them supplied with "house numbers", identifying a particular purchasing specification and not necessarily a device with a standardized registered number.

For example, an HP part 1854,0053 is a (JEDEC) 2N2218 transistor[92][93] which is also assigned the CV number CV7763.[94] With so many independent naming schemes, and the abbreviation of part numbers when printed on the devices, ambiguity sometimes occurs.

For example, two different devices may be marked "J176" (one the J176 low-power JFET, the other the higher-powered MOSFET 2SJ176).

As older "through-hole" Transistors are given surface-mount packaged counterparts, they tend to be assigned many different part numbers because manufacturers have their own systems to cope with the variety in pinout arrangements and options for dual or matched n–p–n + p–n–p devices in one pack.

So even though the original device (such as a 2N3904) may have been assigned by a standards authority and be well known to engineers over the years, the new versions are far from standardized in their naming.

The first BJTs were made from germanium (Ge).

Silicon (Si) types currently predominate but certain advanced microwave and high-performance versions now employ the compound semiconductor material gallium arsenide (GaAs) and the semiconductor alloy silicon germanium (SiGe).

Single element semiconductor material (Ge and Si) is described as elemental.

Rough parameters for the most common semiconductor materials used to make Transistors are given in the adjacent table.

These parameters will vary with increase in temperature, electric field, impurity level, strain, and sundry other factors.

The junction forward voltage is the voltage applied to the emitter–base junction of a BJT in order to make the base conduct a specified current.

The current increases exponentially as the junction forward voltage is increased.

The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes).

The lower the junction forward voltage the better, as this means that less power is required to "drive" the Transistor.

The junction forward voltage for a given current decreases with increase in temperature.

For a typical silicon junction, the change is −2.1 mV/°C.[95] In some circuits, special compensating elements (sensistors) must be used to correct for such changes.
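For instance, taking the −2.1 mV/°C figure above, warming a silicon junction from 25 °C to 100 °C shifts its forward voltage by roughly

$$\Delta V_{BE} \approx (-2.1\ \mathrm{mV/^\circ C}) \times (100 - 25)\ \mathrm{^\circ C} \approx -158\ \mathrm{mV},$$

so a nominal 0.7 V drop falls to about 0.54 V, which is why fixed-bias designs need such compensation.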

The density of mobile carriers in the channel of a MOSFET is a function of the electric field forming the channel and of various other phenomena such as the impurity level in the channel.

Some impurities, called dopants, are introduced deliberately in making a MOSFET, to control the MOSFET electrical behavior.

The electron mobility and hole mobility columns show the average speed at which electrons and holes drift through the semiconductor material with an electric field of 1 volt per meter applied across the material.

In general, the higher the electron mobility the faster the Transistor can operate.

The table indicates that Ge is a better material than Si in this respect.

However, Ge has four major shortcomings compared to silicon and gallium arsenide: its maximum junction temperature is lower, its leakage current is higher, it cannot withstand high voltages as well, and it is less suitable for fabricating integrated circuits. Because the electron mobility is higher than the hole mobility for all semiconductor materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor.

GaAs has the highest electron mobility of the three semiconductors.

It is for this reason that GaAs is used in high-frequency applications.

A relatively recent[when?] FET development, the high-electron-mobility Transistor (HEMT), has a heterostructure (junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)-gallium arsenide (GaAs) which has twice the electron mobility of a GaAs-metal barrier junction.

Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz.

HEMTs based on gallium nitride and aluminium gallium nitride (AlGaN/GaN HEMTs) provide a still higher electron mobility and are being developed for various applications.

'Max. junction temperature' values represent a cross section taken from various manufacturers' data sheets.

This temperature should not be exceeded or the Transistor may be damaged.

'Al–Si junction' refers to the high-speed (aluminum–silicon) metal–semiconductor barrier diode, commonly known as a Schottky diode.

This is included in the table because some silicon power IGFETs have a parasitic reverse Schottky diode formed between the source and drain as part of the fabrication process.

This diode can be a nuisance, but sometimes it is used in the circuit.

Discrete Transistors can be individually packaged Transistors or unpackaged Transistor chips (dice).

Transistors come in many different semiconductor packages (see image).

The two main categories are through-hole (or leaded), and surface-mount, also known as surface-mount device (SMD).

The ball grid array (BGA) is the latest surface-mount package (currently only for large integrated circuits).

It has solder "balls" on the underside in place of leads.

Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power rating.

Transistor packages are made of glass, metal, ceramic, or plastic.

The package often dictates the power rating and frequency characteristics.

Power Transistors have larger packages that can be clamped to heat sinks for enhanced cooling.

Additionally, most power Transistors have the collector or drain physically connected to the metal enclosure.

At the other extreme, some surface-mount microwave Transistors are as small as grains of sand.

Often a given Transistor type is available in several packages.

Transistor packages are mainly standardized, but the assignment of a Transistor's functions to the terminals is not: other Transistor types can assign other functions to the package's terminals.

Even for the same transistor type the terminal assignment can vary (normally indicated by a suffix letter to the part number, e.g. BC212L and BC212K).

Nowadays most transistors come in a wide range of SMT packages; in comparison, the list of available through-hole packages is relatively small. The most common through-hole transistor packages, in alphabetical order, include: ATV, E-line, MRT, HRT, SC-43, SC-72, TO-3, TO-18, TO-39, TO-92, TO-126, TO-220, TO-247, TO-251, TO-262, ZTX851.

Unpackaged Transistor chips (die) may be assembled into hybrid devices.[96] The IBM SLT module of the 1960s is one example of such a hybrid circuit module using glass passivated Transistor (and diode) die.

Other packaging techniques for discrete Transistors as chips include Direct Chip Attach (DCA) and Chip On Board (COB).[96] Researchers have made several kinds of flexible Transistors, including organic field-effect Transistors.[97][98][99] Flexible Transistors are useful in some kinds of flexible displays and other flexible electronics.

Chevrolet Malibu

The Chevrolet Malibu is a mid-size car manufactured and marketed by Chevrolet from 1964 to 1983 and since 1997.

The Malibu began as a trim-level of the Chevrolet Chevelle, becoming its own model line in 1978.

Originally a rear-wheel-drive intermediate, the Malibu was revived by GM as a front-wheel-drive car in 1997.

Named after the coastal community of Malibu, California, the Malibu was marketed primarily in North America, with the eighth generation introduced globally.

With the discontinuation of the Chevrolet Impala in March 2020, the Malibu and the Sonic currently are the only sedans offered in the Chevrolet division in the U.S.A.

The first Malibu was a top-line subseries of the mid-sized Chevrolet Chevelle from 1964 to 1972.

Malibus were generally available in a full range of bodystyles including a four-door sedan, two-door Sport Coupe hardtop, convertible and two-seat station wagon.

Interiors were more lavish than lesser Chevelle 300 and 300 Deluxe models thanks to patterned cloth and vinyl upholstery (all-vinyl in convertibles and station wagons), deep-twist carpeting, deluxe steering wheel and other items.

The Malibu SS was available only as a two-door Sport Coupe hardtop or convertible and added bucket seats, a center console (with optional four-speed manual or Powerglide transmissions), engine gauges and special wheelcovers; it was offered with any six-cylinder or V8 engine available in other Chevelles, with the top option in 1964 being a 300 hp (224 kW; 304 PS) 327 cu in (5.4 L) V8.

For 1965, Malibus and other Chevelles received new grilles and revised tail sections and had the exhaust pipes replaced but carried over the same basic styling and bodystyles from 1964.

The Malibu and Malibu SS models continued as before with the SS featuring a blacked-out grille and special wheelcovers.

Top engine option was now a 350 hp (261 kW; 355 PS) 327 cu in (5.4 L) V8.

The Malibu SS was replaced in 1966 by a new Chevelle SS-396 series that included a big-block 396 cu in (6.5 L) V8 engine (Canadian market did not receive the SS396 but marketed the former Malibu SS nameplate until January 1967 when it was phased out), heavy duty suspension and other performance equipment.

Other SS-396 equipment was similar to Malibu Sport Coupes and convertibles including an all-vinyl bench seat.

Bucket seats and a console with floor shift were now optional on the SS; for 1966, with the SS now denoting a car with a big-block engine, bucket seats became a new option on the regular Malibu Sport Coupe and convertible, on which any six-cylinder or small-block V8 could be ordered.

Also new for 1966 was the Chevelle Malibu four-door Sport Sedan hardtop.

Styling revisions on all 1966 Chevelles brought more rounded styling similar to the full-sized Chevrolets, with sail panels and tunneled rear windows featured on two-door hardtop coupes.

For 1967, the same assortment of bodystyles were continued with styling changes similar to all other Chevelles including a new grille and revised tail section with taillights that wrapped around to the side.

New this year was a Chevelle Malibu Concours station wagon with simulated woodgrain exterior side panel trim.

Front disc brakes were a new option along with a stereo 8-track tape player.

The same assortment of drivetrains carried over from 1966 with the top 327 cu in (5.4 L) V8 dropped from 350 to 325 hp (261 to 242 kW; 355 to 330 PS).

Malibus and all other Chevelles were completely restyled for 1968, with semi-fastback rooflines on two-door hardtops and wheelbases split to 112 inches (2,800 mm) on two-door models and 116 inches (2,900 mm) on four-door sedans and station wagons.

Engine offerings included a new 307 cu in (5.0 L) V8 rated at 200 hp (149 kW; 203 PS) that replaced the 283 cu in (4.6 L) V8 that had served as the base V8 since the Chevelle's introduction in 1964.

Inside was a new instrument panel featuring round gauges in square pods similar to what would appear in Camaros the following year.

New for 1968 was the Concours luxury option for Malibu sedans and coupes that included upgraded cloth or vinyl bench seats, carpeted lower door panels, woodgrain trim on dash and door panels, a center console and floor shifter (only with the hardtop and convertible, which was shared with the SS396) and Concours nameplates.

There was again a top-line Concours Estate wagon with simulated woodgrain trim that had the same interior and exterior appointments as the Malibu sedans.

New grilles and rear decks with revised taillights highlighted the 1969 Malibus and other Chevelles.

Instrument panels were revised, and front seat headrests were now standard equipment due to a federal safety mandate.

The ignition switch moved from the instrument panel to the steering column and also doubled as a steering wheel lock.

The 307 continued as the base V8, but the 327 engines were replaced by new 350 cu in (5.7 L) V8s of 255 and 300 hp (190 and 224 kW; 259 and 304 PS).

GM's three-speed Turbo Hydra-Matic transmission, previously only offered on SS-396 Chevelles (RPO M40), was now available on all models with all engines: THM400s were used with the 396, while the THM350 (RPO M38), first introduced with the Camaro and Nova, was phased in on cars optioned with the six-cylinder and small-block V8s, which in previous years were only available with the two-speed Powerglide.

A police package Chevelle 300 (pillared four-door sedan) was available for the 1969 model year, which came with the L35-code 396; it was built in small numbers at a time when the Chrysler Corporation dominated the market for law enforcement orders.

Some 1964 and 1965 Chevelle 300s came with the BO7 police package but were powered by the inline six.

For 1970, the Malibu was initially the only series of Chevelle offered, aside from the SS-396 and new SS-454, as the low-line 300 and 300 Deluxe models were discontinued for the American market (they continued in Canada until 1972); this also eliminated the two-door pillared coupes from the Chevelle lineup, which were never included in the Malibu series.

New grilles, rear decks with taillights moved into the bumper and revised Sport Coupe roofline highlighted this year's changes.

The standard six-cylinder engine was enlarged from 230 cu in (3.8 L) to 250 cu in (4.1 L), now rated at 155 hp (116 kW; 157 PS), while the same assortment of V8s carried over with the addition of a 330 hp (246 kW; 335 PS) 400 cu in (6.6 L) V8 on non-SS Chevelles.

At mid-year, the Malibu was rejoined by lower-line Chevelle models that were simply called the base Chevelle in both four-door sedans and two-door hardtops.

In 1971, Malibus and all other Chevelles got a new grille surrounded by single headlamps replacing the duals of previous years, and four round taillights similar to Camaros and Corvettes were located in the bumper.

All engines were detuned to use lower-octane unleaded gasoline this year per GM corporate policy as a first step toward the catalytic converter-equipped cars planned for 1975 and later models which would require no-lead fuel.

Only new grilles highlighted the 1972 Malibu and other Chevelles.

All bodystyles were carried over from 1971, but 1972 would be the final year for hardtops and convertibles as the redesigned Chevelles originally planned for this year, but delayed until 1973, would feature Colonnade styling with side pillars and frameless door windows.

The 1972 Chevelle was also ordered with the police package which used RPO 9C1 (which became the default SEO (service option) code for subsequent Chevrolet PPV packages).

The Chevelle was redesigned for the 1973 model year.

Models included the base Deluxe, mid-range Malibu & Malibu SS and the top-line Laguna.

For 1974, the Deluxe was dropped, and the Malibu became the entry-level Chevelle.

The Laguna trim package was replaced by the Malibu Classic, which used a stacked arrangement of four rectangular headlights that made its way to dealers in the 1976 model year, offering the Chevrolet-built 250 CID inline six as the base engine.

The Laguna S-3 model was introduced to replace the SS, and continued through 1976.

For the 1978 model year, the Malibu name, which had been the bestselling badge in the lineup, replaced the Chevelle name.

This was Chevrolet's second downsized nameplate, following the lead of the 1977 Chevrolet Caprice.

The new, more efficient platform was over a foot shorter and had shed 500 to 1,000 pounds (230 to 450 kg) compared to previous versions, yet offered increased trunk space, leg room, and head room.[1] Only two trim levels were offered: Malibu and Malibu Classic.

The Malibu Classic Landau series had a two-tone paint job on the upper and lower body sections, and a vinyl top.

This generation introduced the Chevrolet 90° V6 family of engines, with the 200 CID (3.3 L) V6 as the base engine for the all new 1978 Chevrolet Malibu, along with the 229 CID (3.8 L) V6 and the 305 CID (5.0 L) Chevy built V8 as options.

The 200 and 229 engines were essentially a small-block V8 with one pair of cylinders lopped off.

The front and rear bellhousing faces were the same as those of the small V8.

The 231 engine was a Buick product and featured a front distributor.

Three bodystyles were produced (station wagon, sedan, and coupe), and the design was also used as the basis for the El Camino pickup truck with its own chassis.

The sedan initially had a conservative six-window notchback roofline.

This was in contrast to the unusual fastback rooflines adopted by the Oldsmobile and Buick divisions, which would later revert to a more formal pillared style.

To increase rear seat hip room (and encourage more orders for the high-profit air conditioner), the windows in the rear doors of four-door sedans were fixed, while the wagons had small moveable vents.

With the rear window regulators no longer required, Chevrolet was able to recess the door arm rests into the door cavity, resulting in a few extra inches of rear seat room.

Customers complained about the lack of rear seat ventilation.

No doubt this design contributed to the number of factory air conditioning units sold with the cars, to the benefit of General Motors and Chevrolet dealers.

For the 1981 model year, sedans adopted a four-window profile and "formal" pillared upright roofline.

The two-door coupe was last produced in this year, as the Monte Carlo assumed the market position held by the 2-door coupe.

For 1982 the Malibu was facelifted with more squared-off front styling marked by quad headlights with long, thin turn signals beneath them.

The look was very reminiscent of the also recently facelifted Chevrolet Caprice.

For 1983, Malibus gained a block-style "Malibu" badge on the front fenders to replace the cursive-style script located on the rear quarter panels of previous model years.

The four-door Malibu was commonly used in fleet service, especially for law enforcement.

After the Chevrolet Nova ceased production in 1979, the mid-sized 9C1 police version (not to be confused with the full-size Chevrolet Impala 9C1 which was also available) was transferred to the Malibu, filling a void for the mid-sized police patrol cars.

A 9C1-equipped Malibu with an LT-1 Z-28 Camaro engine, driven by E. Pierce Marshall, placed 13th of 47 in the 1979 Cannonball Baker Sea-To-Shining-Sea Memorial Trophy Dash, better known as the Cannonball Run.[2] There was no factory Malibu SS option available on this generation.

The SS only came in the El Camino.

The rare and striking 1980 Malibu M80 was a dealer package offered only in North and South Carolina.

It was mostly aimed at NASCAR fans who regularly traveled to Darlington Raceway.

To this day, the number actually produced is unknown; estimates place it at around 1,901 cars.

All M80s had to be white with dark blue bucket seats and center console interior.

The base of the M80 was a two-door sport coupe equipped with the F41 Sport Suspension package and the normal V8 (140 hp) drivetrain.

The M80 option added two dark blue skunk stripes on top and a lower door stripe with the M80 identification.

The package also added front and rear spoilers and 1981 steel rally wheels (sourced from the 1980 Monte Carlo).

In Mexico, General Motors produced this generation at the Ramos Arizpe plant; it was sold for three years (1979 to 1981).

Mexican versions came in three trim levels (Chevelle, Malibu and Malibu Classic) and two body styles (sedan and coupe) with the 250 cu in (4.1 L) I6 as basic engine and the 350 cu in (5.7 L) 260 hp (194 kW) V8 as the optional; this engine was standard on Malibu Classic models during those three years.

This was possible because Mexican emissions regulations remained relatively lax at the time.

In 1981, General Motors of Canada in Oshawa produced a special order of 25,500 four-door Malibu sedans for Saddam Hussein's Iraqi government.

The deal was reportedly worth well over $100 million to GMCL.

These special-order Malibus carried the unusual combination of GM's lowest-power carbureted V6, the 110 hp (82 kW) 229 cu in (3.8 L) engine, mated to a three-speed manual transmission with a unique on-the-floor stick shifter.

All of the cars were equipped with air conditioning, heavy duty cooling systems, AM/FM cassette decks, front bench seats, 200 km/h speedometers, tough tweed and vinyl upholstery and 14-inch (360 mm) body-color stamped steel wheels with small "dog dish" hubcaps.

However, only 13,000 units ever made it to Iraq, with the majority of the cars becoming taxis in Baghdad (once the cab-identifying orange paint was added to the front and rear fenders)[citation needed].

With the remaining balance of about 12,500 additional Malibus either sitting on a dock in Halifax or awaiting port shipment in Oshawa, where they were built, the Iraqis suddenly cancelled the order in 1982.[3] Excuses reportedly included various "quality concerns", including the inability of the local drivers to shift the finicky Saginaw manual transmission.

This was eventually traced to an apparent clutch release problem that required on-site retrofitting by a crew of Canadian technicians sent to Iraq to support the infamous "Recall in the Desert".

Later speculation was that the Iraqis were actually forced to back out for financial reasons, due to their escalating hostilities with Iran requiring the immediate diversion of funds to support the Iraqi war effort.

Then-GM of Canada president Donald Hackworth was initially quoted as stating that GMCL still intended to try to sell the Malibus overseas in other Middle East markets; however, in the end, the orphaned "Iraqi Taxi" Malibus were all sold to the Canadian public at the greatly reduced price of about C$6,800.

Over the years, they have acquired a low-key 'celebrity' status, sometimes being colloquially referred to as "Iraqibu".[4][5] The Malibu was an extensively used body style in NASCAR competition from 1973 to 1983.

The Laguna S-3 variant, in particular, was successful during the 1975 through 1977 racing seasons, with Cale Yarborough winning 20 races in those years as well as winning the NASCAR championship one year.

Because it was considered a limited edition model, NASCAR declared it ineligible for competition following the 1977 season, even though (given NASCAR's three-year eligibility rule) it should have been allowed to run through 1979.

Beginning in 1981, the downsized Malibu body style was eligible to run, but given its boxy shape, only one driver, Dave Marcis, ran it in 1981 and 1982, with one victory in a rain-shortened Richmond 400 at Richmond in 1982, the independent driver's last win.

The base 200 cu in (3.3 L) V6 engine for the 1978 Chevrolet Malibu developed just 95 hp (71 kW; 96 PS), with optional upgrades to a 105 hp (78 kW; 106 PS) V6 or a 145 hp (108 kW; 147 PS) V8.

The largest and most powerful option was the 165–170 hp (123–127 kW) 350 cu in (5.7 L) V-8.

Available engines by model year:

78: 200 V6 (95 hp), 231 (3.8 L) V6 (105 hp), 305 V8 (140 hp), 350 V8 (165 hp)
79: 200 V6 (95 hp), 231 (3.8 L) V6 (115 hp), 267 V8 (125 hp), 305 V8 (140 hp), 350 V8 (165 hp)
80: 229 V6 (110 hp), 231 (3.8 L) V6 (110 hp), 267 V8 (115 hp), 305 V8 (140 hp), 350 V8 (170 hp)
81: 229 V6 (110 hp), 231 (3.8 L) V6 (110 hp), 267 V8 (115 hp), 305 V8 (140 hp), 350 V8 (170 hp)
82: 229 V6 (110 hp), 231 (3.8 L) V6 (110 hp), 4.3 L V6 diesel (85 hp), 305 V8, 350 V8 diesel (105 hp)
83: 229 V6 (110 hp), 231 (3.8 L) V6 (110 hp), 4.3 L V6 diesel (85 hp), 305 V8, 350 V8 diesel (105 hp)

Beginning in 1982, the Malibu shared GM's redesignated rear-wheel-drive G platform with cars like the Pontiac Grand Prix, Oldsmobile Cutlass Supreme and Buick Regal.

The Malibu Classic was last marketed in 1982; Malibus were produced as four-door sedans and as station wagons until 1983, at which time it was fully replaced by the front-wheel-drive Chevrolet Celebrity.

Although the sedan and wagon were phased out, the El Camino utility, which shared styling with the Malibu, remained in production until 1987.

A new front-wheel-drive Malibu was introduced for the 1997 model year on an extended-wheelbase version of the GM N platform, shared with the Buick Skylark, Oldsmobile Achieva, Oldsmobile Alero, and Pontiac Grand Am, as a competitor to the perennial stalwarts, the Honda Accord and Toyota Camry, which were the best sellers in the midsize market.

GM phased out the L-platform Corsica/Beretta as this new midsize was introduced, in the wake of GM phasing out its B platform (at this time the Chevrolet Lumina was the largest car in the lineup).

All N-body Malibus were produced at the Oklahoma City Assembly plant (after 2002 it was retooled to build the GMT360 SUVs) and the Wilmington Assembly plant (1997 to 1999), before moving production to Lansing, Michigan.

The Wilmington plant was retooled to build the Saturn L-Series in 1999.

The Oldsmobile Cutlass was a rebadged, slightly more upscale version of the Malibu, produced from October 1996 to 1999.[8] It was intended as a placeholder model to fill the gap left by the discontinuation of the aging Oldsmobile Cutlass Ciera, before the all-new Alero arrived in 1999.

The Malibu itself replaced the compact Chevrolet Corsica.

Power came from a 2.4 L 150 hp (112 kW) I4 or 3.1 L 155 hp (116 kW) V6.

The Malibu was Motor Trend magazine's Car of the Year for 1997; Car and Driver later criticized the award in 2009, arguing that the Malibu's performance and interior quality were not distinguished enough to warrant such praise in hindsight.[9] Standard features included four-wheel ABS brakes, hydraulic engine mounts and air conditioning.[10] The 1997 to 1999 Malibus had a front grille with the Malibu logo in silver in the center; 2000 to 2003 models, and the Classic, had the blue Chevrolet emblem on the front grille.

The 1997 to 2003 LS models were sometimes equipped with special gold-colored badges (the rear Malibu lettering and logo).

When a new Malibu was introduced on the Epsilon platform for 2004, the N-body Malibu was renamed Chevrolet Classic and remained in production for the 2004 and 2005 model years, being restricted to rental car companies and fleet orders with production ending in April 2005.

The 3.1 L V6 was updated in 2000 with a new power rating of 170 hp (127 kW), and the 2.4 L 4-cylinder was dropped after that year.

However, a 4-cylinder was reintroduced in 2004 when the 2.2 L Ecotec was offered on the Classic.

U.S. Environmental Protection Agency fuel mileage estimates for the 2.2 L Ecotec engine are 21 mpg‑US (11 L/100 km; 25 mpg‑imp) to 31 mpg‑US (7.6 L/100 km; 37 mpg‑imp).

The February 2002 issue of HCI: Hot Compact & Imports magazine featured the Chevrolet Malibu Cruiser concept that GM Performance Division built for the SEMA show in 2001.

The car was painted in "Sublime Lime" by BASF[11] and featured a highly modified turbocharged 3500 SFI 60-degree V6 (producing 230 hp (172 kW) at 5,000 rpm and 280 lb⋅ft (380 N⋅m) of torque at 2,900 rpm), a 4T65-E four-speed transmission with overdrive, and a set of 19x8-inch wheels by Evo wrapped in Toyo Proxes T1-S high-performance tires.

Numerous interior modifications included a full-length custom center console, four black leather Sparco racing seats, and a Kenwood entertainment center (with radio, CD, DVD, TV, 10-disc changer and numerous amps and speakers).

Exterior modifications included custom HID headlamps (both low and high beams), "Altezza" style taillights, and a custom bodykit.[12] Chevrolet produced the Cruiser as a concept, and it was therefore never available for purchase.

Chevrolet's intent was to attract younger buyers to the stock model and demonstrate that aftermarket modifications could be made.[citation needed] For 2004, the Malibu name was moved to GM's new Epsilon platform, based on the 2002 Opel Vectra C.

The Epsilon-based Malibu came in two bodystyles, a standard 4-door sedan and a 5-door Malibu Maxx hatchback (the first mid-size Chevrolet hatchback since the 1989 Chevrolet Corsica).[15][16] The Malibu Maxx has a fixed glass roof panel over the rear seats with a retractable sunshade, and an optional glass panel sunroof over the front seats and was similar in execution to the Opel Signum, a large hatchback derived from the Vectra C.

Base power for the sedan came from a 2.2 L Ecotec L61 I4 producing 144 hp (108 kW).

LS and LT trim sedans and Maxx models originally came with a 3.5 L 201 hp (149 kW) High Value LX9 V6.

The SS sedan and Maxx models were powered by the 3.9 L 240 hp (179 kW) High Value LZ9 V6.[17] For 2007, the LX9 was replaced with the LZ4 V6, which in the Malibu produced 217 hp (162 kW).

The L61 Ecotec was also updated for the 2007 model year with many improvements.

A remote starter was also available, which was introduced on several other GM vehicles for 2004.

This generation of the Malibu initially debuted with a front fascia design featuring a wide grille split horizontally by a prominent chrome bar that ran the entire width of the car, which was intended to make it resemble Chevrolet's trucks.

However, for 2006, the front end was updated with more conventional styling: the chrome bar was removed, and the grille itself was made smaller, bearing a resemblance to the grille on the previous Malibu.[18] The car also added GM badges near the front doors.

While the Malibu Maxx was discontinued after the 2007 model year, the Malibu sedan remained in production for the 2008 model year, known as the Malibu Classic.

The cars themselves bear Malibu badges, unlike the past generation Classic.

A special SS trim was available on the Malibu and Malibu Maxx with the 3.9 L LZ9 V6 from 2006 to 2007, developing 240 hp (179 kW) and 240 lb⋅ft (325 N⋅m) channeled through a 4T65-E four-speed automatic with Tap-Up/Tap-Down shifting, and featuring sport suspension with a tower-to-tower brace, 18" alloy wheels, a universal home remote transmitter, a rear spoiler and hydraulic power steering.

Changes to differentiate the SS from the lower trims include three-spoke, leather wrapped steering wheel with SS badge, sport cloth and leather seats, side skirts, chrome tip exhausts, and more aggressive front and rear clips.[19] The Malibu was redesigned for the 2008 model year by Bryan Nesbitt,[21] under the direction of GM Vice Chairman Robert Lutz—who was determined to make the nameplate competitive with Japanese mid-size cars.

Extensive engineering and design went into the remodel.[22] Trim levels were Base (2008 only), LS, LT, Hybrid (2008 and 2009 only), and the range-topping LTZ.

The top-line LTZ cars had clear brake light lenses with red LEDs, the balance of trim packages retaining red lenses with conventional brake light bulbs.

The seventh generation Malibu is built on a revised version of the long-wheelbase Epsilon platform shared with the Saturn Aura, the Opel Signum, and Pontiac G6.

It is assembled in Kansas City, Kansas.

Overall, it is three inches (76 mm) longer, with a six-inch (152 mm) longer wheelbase.

Interior room remains mid-size, like the previous Malibus, and has decreased from 101 cubic feet (2.9 m3) to 97.7 cubic feet (2.8 m3) despite the longer wheelbase, although front legroom has increased from 41.9 in (1,064 mm) to 42.2 in (1,072 mm).[17][23] Rear legroom has decreased from 38.5 in (978 mm) to 37.6 in (955 mm).[24] The interior design was revised, with a selection of two-tone color combinations (brick and tan two-tone), a telescoping steering wheel, higher-quality materials and a twin-cowl dash design.[25] The drag coefficient (Cd) is 0.33.[26] The seventh generation Malibu offered several engine choices: the 2.4 L I4 and 3.6 L V6 engines have aluminum blocks and heads, dual overhead cams, four valves per cylinder, twin balance shafts, and variable valve timing.

The 3.5 L V6 has aluminum heads, an iron block, overhead valves, and limited variable valve timing.

The 3.5 L V6 was offered as an upgrade for special-order fleet vehicles, to replace the Ecotec engine, and generally was not available for retail customers.

The 3.5 L V6 was not available in the LTZ.

The 3.5 L V6 with four-speed transmission has been the only drivetrain available in the 2008, 2009, and 2010 models in Israel.

Partway through the 2008 model year, the 2.4 L Ecotec was offered with a six-speed automatic transmission to improve performance and fuel economy.[25] For 2009 models, the six-speed transmission mated to the 2.4 L 4-cylinder engine or the 217 horsepower 3.5 L V6 mated to the four-speed automatic were made available on the 1LT; the six-speed became standard on 2LT models the same year.

The LS models were equipped with the four-speed transmission only.

A manual transmission was not offered.[27] All models are front-wheel-drive sedans.

Chevrolet dropped the Malibu Maxx hatchback model.

Partway through the 2010 model year, the GM badges were removed from the front doors.[28][29] OnStar was included on all Malibu models as standard equipment (excluding fleet vehicles, where this feature is optional).

Six air bags were also standard on the seventh generation Malibu; two dual-stage front bags, two side-impact curtain air bags protecting the heads of both front and rear passengers, and two side-impact thorax bags mounted in the front seats.

Traction control, an electronic tire pressure monitoring system, four-wheel disc brakes, antilock brakes, and daytime running lamps were standard safety features on all Malibus.

GM's StabiliTrak electronic stability control was standard on all models, including the base LS.

In 2011, the base LS 1LS Malibu gained more standard features, like Bluetooth technology with stereo audio playback capability, a remote USB and iPod/iPhone port, remote start, a security alarm, an upgraded OnStar system, power front driver's seat, chrome hubcap wheel covers, body-colored side mirrors with power adjustments and body-colored accents, a single wood dashboard accent, tinted windows, and a six-speed automatic transmission with overdrive and manual shift capabilities.

The LT 1LT model lost its available eight-speaker Bose premium sound system.

The LT 2LT got a package that included a sunroof, leather power heated seats, and more convenience and comfort features.

For 2011, the four-speed automatic transmission was dropped from the Malibu powertrain lineup.[30] This model year also saw the deletion of the steering-wheel-mounted paddle shifters on 6AT cars in favor of a selector-mounted rocker switch for manual operation; no reason was ever given for the change.

A BAS mild hybrid, with the base inline-4 like the Saturn Aura Green Line, was available offering an increased fuel economy of 24 mpg‑US (9.8 L/100 km; 29 mpg‑imp)/32 mpg‑US (7.4 L/100 km; 38 mpg‑imp), which for the 2009 model was increased to 26 mpg‑US (9.0 L/100 km; 31 mpg‑imp)/34 mpg‑US (6.9 L/100 km; 41 mpg‑imp).
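As a quick sanity check on the paired figures above, US mpg converts to litres per 100 km via the standard constant 235.215 (i.e., L/100 km = 235.215 / mpg). A minimal Python sketch, purely illustrative:

```python
# Convert US miles-per-gallon figures to litres per 100 km.
# 235.215 = 100 * 3.785411784 L/gal / 1.609344 km/mi (standard constant).

def mpg_us_to_l_per_100km(mpg: float) -> float:
    """Convert US mpg to litres per 100 km."""
    return 235.215 / mpg

for mpg in (24, 32, 26, 34):
    print(f"{mpg} mpg-US ~= {mpg_us_to_l_per_100km(mpg):.1f} L/100 km")
# Prints 9.8, 7.4, 9.0 and 6.9 L/100 km, matching the figures quoted above.
```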

The Malibu hybrid was dropped for the 2010 model year.[31][32] The 2008 Malibu received critical praise from the automotive press, with The New York Times referring to it as being "like a super Accord, but from GM" and Car and Driver magazine declaring, "Camry, Beware." It also garnered high praise from Motor Trend magazine, being rated higher than the Honda Accord and Nissan Altima in the magazine's 2008 Car of the Year competition.

Kelley Blue Book named it the "2008 best redesigned vehicle".[33] Car and Driver stated that while it would not be "enough to steal the top-dog sales title from the perennial Honda and Toyota mid-sizers", they noted "for the first time since Chevrolet revived the storied nameplate in 1997, it has enough of what it needs to sell in significant numbers to the public, not just rental fleets".[34] Edmunds.com praised the Malibu's interior and exterior styling, quietness, and balance between ride and handling, while criticizing the thick C-pillars that obstruct the driver's view, the narrower chassis compared to other midsize cars[35] (which reduces rear seating room and also lacks a center armrest) and the lack of features such as dual-zone HVAC, Bluetooth compatibility, and keyless ignition.[36] Robert Cumberford, design critic at Automobile magazine, noted that while the interior of the platform variant Saturn Aura featured cheap materials, the Malibu's interior was improved.[37][38] Writers of various reviews[who?] for the 2008 Malibu believed Chevrolet would be getting back on track in quality and excitement in the mid-size segment after a history of ordinary, bland offerings, such as the Celebrity, Corsica, Lumina, and even the previous two generations of Malibu since its 1997 revival.

In January 2008, the redesigned Malibu received the North American Car of the Year award at the North American International Auto Show in Detroit, in voting by a panel of 50 automotive journalists, with the runners-up being the 2008 Cadillac CTS and the 2008 Honda Accord.

The Malibu's win marked the second straight year that a car built on GM's Epsilon platform won the North American Car of the Year award, the 2007 award having gone to the Saturn Aura.

Initial sales results were positive, with the Malibu joining the Cadillac CTS and Buick Enclave on a list of GM vehicles whose sales have exceeded expectations.

The redesigned Malibu sold more than 50% more units in 2008 than in 2007, increasing GM's mid-size market share to 8.4% from 5.7%, while the Camry and Accord percentages remained flat at about 21% and 17.5%, according to GM.

Sales to rental customers dropped to 27% of the total, as GM limited sales to rental fleets.[22][39] The short-lived Malibu Hybrid, along with its sister, the Saturn Aura Green Line, which share the powertrain and other major components, was particularly criticized due to its lack of fuel savings and cost (relative to a standard 4-cylinder Malibu), plus the Hybrid's worsened driving dynamics.[32][40]

On September 21, 2012, General Motors recalled 473,841 vehicles involving the Chevrolet Malibu, Pontiac G6 and Saturn Aura from model years 2007 through 2010 equipped with four-speed automatic transmissions.

The problem was a condition that could allow the cars to roll while in park.

The recall affected 426,240 vehicles in the United States, 40,029 in Canada and 7,572 in other markets.[41][42] The 2013 Malibu moved to the GM Epsilon II platform and debuted in Asia in late 2011, followed by North America in 2012.[47] The new Malibu became a global vehicle, replacing both the North American Malibu and the GM Korea vehicles previously sold around the world.

The Malibu was unveiled as a show car simultaneously at Auto Shanghai in China (written as "迈锐宝", Mai-Rui-Bao [48]), and on Facebook, on April 18, 2011.[49][50] It was also shown at the New York International Auto Show in New York City later in April.

The eighth generation Malibu was available in the trim levels LS 1LS (not available for fleet-ordered models), the LT 1LT (this is the base model for fleet-ordered models), the LT 2LT, the ECO 1SB, the ECO 2SA, and the LTZ 1LZ.

Both ECO models officially went on sale in the spring of 2012, with the gas-only models following in late summer 2012.

The Turbo models followed in early 2013.

All models, aside from the LS 1LS, were equipped with a large touch-screen display using Chevrolet's MyLink and offering Pandora Internet Radio playback capabilities via a USB cable and an iPhone 4, 4S, or 5.

SiriusXM Travel Link was also included on all navigation-equipped Malibu models.

The eighth generation Malibu was sold in "nearly 100 countries on six continents".

In the United States, it is manufactured in two plants, Fairfax, KS and Detroit-Hamtramck.[51] In Australia and New Zealand, the Malibu replaced the Holden Epica, and made its debut in 2013 as the Holden Malibu.

It was positioned between the Holden Cruze and Holden Commodore.[52] In South Korea, the Malibu replaces the Daewoo Tosca, as GM has phased out the Daewoo brand in favor of Chevrolet.

Korea was the first market to get the Malibu, in late 2011, followed by China later in 2011 and North America beginning in early 2012.[53] The Malibu made its Middle Eastern debut in 2012 replacing the Holden VE Commodore based Lumina.

In Europe, the Malibu replaced the Chevrolet Epica.

The facelifted Malibu was never sold in Europe.

In North America, the eighth generation Malibu continued to be sold for the 2016 model year as the Malibu Limited while the next generation went on sale.[44][54] It was mostly identical to the 2015 model, but only featured the 2013 I4 engine variant (LCV instead of LKW) with auto stop-start.

In China, the eighth generation Malibu continues[when?] in production alongside the ninth generation Malibu.

It received a facelift in 2016.[55] A 1.5-liter turbo engine was added for the 2017 model year.[citation needed] The eighth generation Malibu was offered with four-cylinder engines and six-speed automatic transmissions.

The North American version was offered with a 2.5 L engine.[vague][citation needed] The European version was offered with a 2.4 L Ecotec engine with an aluminum block and cylinder head, and a 2.0 L diesel (1,956 cc) VCDi developing 160 PS (120 kW).[56] The version offered in the Middle East had the 2.4 L Ecotec engine.[citation needed] Also available was a 3.0 L V6 engine making 260 hp and 290 N⋅m.[57] In the Australian market two Holden-badged versions were offered, the CD and the CDX, with the 2.4 L Ecotec or 2.0 L diesel.[58] Standard safety features on the eighth generation Malibu include dual-stage front airbags for the driver and front passenger, along with front pelvic/thorax side-impact and knee airbags.

Roof rail airbags with rollover protection are standard.

Also available as optional extras are second-row head/thorax side-impact airbags, a lane departure warning system with forward collision alert, and a rearview camera system.[62] In a March 2012 comparison test by Car and Driver, the "light electrification" Chevrolet Malibu Eco hybrid came in sixth place out of six cars.[63] The Eco is a distinct trim, separate from the Malibu LS, LT, and LTZ.

The Malibu Eco was criticized for its reduced wheelbase, causing a 0.8-inch reduction in legroom for back-seat passengers.

The interior was also criticized for being disappointing and cramped.

The ride, however, was said to be smooth and quiet, with the only problem being the stiff steering.[64] The 2014 Chevrolet Malibu received the highest score in its class from J.D. Power's 2014 Initial Quality Study.[65] The IQS study "examines problems experienced by vehicle owners during the first 90 days of ownership."[66] Eighteen months after the 2013 Malibu's debut, it received a mild refresh.

The changes included additional technology, improved fuel economy, and front-end styling that more closely matched the refreshed Chevrolet Traverse and the newly redesigned Chevrolet Impala.

Minor changes were made to the center console to provide a longer, reportedly more comfortable armrest, plus a pair of cup holders and mobile-phone bins in place of the previous covered storage area.[67] The Chinese model received a refreshed front end with revised headlamps.[citation needed] Among the technology that Chevrolet debuted on the 2014 Malibu was a new six-speed transmission.

Because the transmission was designed to reduce the energy required to pump transmission fluid, it contributed to fuel savings on the refreshed Malibu.

In addition, for the first time in a non-hybrid GM vehicle, an engine stop/start system came standard with the 2.5 L engine.[68] EPA fuel-economy estimates showed an improvement to 25/36 mpg city/highway, up from the 2013 model's 22/34 for the base 2.5 L engine.[69] The 2014 Malibu was available for purchase in late 2013.[70] On April 1, 2015, Chevrolet unveiled a redesigned Malibu at the 2015 New York Auto Show, which went on sale in late 2015 as a 2016 model.

The updated Malibu featured a sleeker yet larger design, similar to the full-size Impala.

The wheelbase was increased by almost four inches, creating more interior space, and fuel efficiency improved, as the car is nearly 300 pounds lighter than the eighth generation model.[74] The 2016 Malibu was offered in four trims: L, LS, LT, and Premier (replacing the LTZ trim).

The Malibu features an all-new 1.5 L LFV Ecotec turbo engine as standard, while a 2.0 L turbocharged engine is offered as an option.

No six-cylinder engine is available.

Other new features introduced on the ninth-generation Malibu for the 2016 model year include available OnStar 4G LTE in-vehicle connectivity, available wireless phone charging, preventive safety technologies including ten standard air bags, a forward collision avoidance system, rear cross traffic alerts, and an optional automatic parking assist.

It features Forward Collision Alert with Following Distance Indicator, Adaptive Cruise Control with Front Automatic Braking, and Front Pedestrian Alert with last-second automatic braking.[75] It is also equipped with stop-start ignition, which operates once the engine is at operating temperature and the brake is applied while the vehicle is stopped.

The 2016 Malibu features a first for the automotive industry, a teen driver feature, which allows parents to view their kids' driving statistics, such as maximum speed, warning alerts and more.[74] To operate the vehicle a parent enables the feature with a PIN in the settings menu of the Malibu's MyLink system, which allows them to register their teen's key fob.

The system's settings apply only to registered key fobs.[76] This technology also mutes the radio until the seat belts are buckled.[76] The 2016 Malibu comes equipped with both Apple CarPlay and Android Auto capability.

However, only one phone platform can be used at any one time.[77][78] A few months ahead of the 2016 model arriving in dealerships, Chevrolet announced that the Malibu had hit a milestone, with more than 10 million sold worldwide since the car was introduced 51 years earlier.[79] China and South Korea are currently the only two countries outside of North America where the 2016 Malibu is sold.

The ninth generation Malibu offers a full hybrid model for the first time, featuring a 1.8 L four-cylinder engine mated to a two-motor drive unit and an electronically controlled, continuously variable automatic transaxle,[80] which provides additional power to assist the engine during acceleration, for 182 horsepower of total system power.

An Exhaust Gas Heat Recovery system allows the engine and cabin to warm up during winter conditions, while an 80-cell, 1.5 kWh lithium-ion battery pack provides electric power to the hybrid system, powering the Malibu Hybrid up to 55 mph (89 km/h) on electricity alone, while the gasoline-powered engine automatically comes on at higher speeds and loads to provide additional power.

The Malibu Hybrid uses a transmission ("two motor drive unit" in GM terms) similar to the second generation Chevrolet Volt,[75] but a much smaller battery, no plug-in option and a different engine.

The following table compares the fuel economy for all variants of the 2016 model year Malibu.

The Hybrid version would be discontinued in 2020 due to decreasing sales, leaving Chevrolet without hybrid cars in its North American lineup and the Malibu with gasoline versions only.

Their electric offering, the Bolt, is still on sale.[81] Chevrolet updated the Malibu in 2018 for the 2019 model year.[84] A new larger front grille, split by a chrome bar with the Chevrolet bow-tie, dominates the front, while the rear change is less significant.

The Premier trim adds LED headlamps while the L/LS/RS/LT/Hybrid trims maintain halogen headlamps.[85] A new RS trim-line is added for a sportier appearance, with a black grille, unique 18-inch wheels, and dual exhaust.

The touchscreen is replaced with the 8-inch Chevrolet Infotainment 3 system in the L/LS/RS/LT trims and Chevrolet Infotainment 3 Plus with HD screen in the Hybrid and Premier trims.[86] Heated second-row seats are added to the Premier trim.

The standard 1.5 L engine is now paired with a CVT instead of the 6-speed automatic transmission.[87] Safety features were also improved for the 2019 Malibu, including Low Speed Forward Automatic Braking, IntelliBeam high-beam assist headlamps and a semi-automated parking system.[88]

16S ribosomal RNA

16S ribosomal RNA (or 16S rRNA) is the component of the 30S small subunit of a prokaryotic ribosome that binds to the Shine-Dalgarno sequence.

The genes coding for it are referred to as the 16S rRNA gene and are used in reconstructing phylogenies, due to the slow rate of evolution of this region of the gene.[2] Carl Woese and George E. Fox were two of the people who pioneered the use of 16S rRNA in phylogenetics in 1977.[3] Multiple sequences of the 16S rRNA gene can exist within a single bacterium.[4] It has several functions. The 16S rRNA gene is used for phylogenetic studies[6] as it is highly conserved between different species of bacteria and archaea.[7] Carl Woese pioneered this use of 16S rRNA.[2] It has been suggested that the 16S rRNA gene can be used as a reliable molecular clock because 16S rRNA sequences from distantly related bacterial lineages are shown to have similar functionalities.[8] Some thermophilic archaea (e.g. order Thermoproteales) contain 16S rRNA gene introns that are located in highly conserved regions and can impact the annealing of "universal" primers.[9] Mitochondrial and chloroplast rRNA genes are also amplified by such primers.

The most common primer pair was devised by Weisburg et al.[6] and is currently referred to as 27F and 1492R; however, for some applications shorter amplicons may be necessary: for example, 454 sequencing with titanium chemistry uses the primer pair 27F-534R, covering V1 to V3.[10] Often 8F is used rather than 27F.

The two primers are almost identical, but 27F has the degenerate base M (A or C) where 8F has a C: AGAGTTTGATCMTGGCTCAG compared with 8F.[11]

In addition to highly conserved primer binding sites, 16S rRNA gene sequences contain hypervariable regions that can provide species-specific signature sequences useful for identification of bacteria.[16][17] As a result, 16S rRNA gene sequencing has become prevalent in medical microbiology as a rapid and cheap alternative to phenotypic methods of bacterial identification.[18] Although it was originally used to identify bacteria, 16S sequencing was subsequently found to be capable of reclassifying bacteria into completely new species,[19] or even genera.[6][20] It has also been used to describe new species that have never been successfully cultured.[21][22] With third-generation sequencing coming to many labs, simultaneous identification of thousands of 16S rRNA sequences is possible within hours, allowing metagenomic studies, for example of gut flora.[23] The bacterial 16S gene contains nine hypervariable regions (V1–V9), ranging from about 30 to 100 base pairs long, that are involved in the secondary structure of the small ribosomal subunit.[24] The degree of conservation varies widely between hypervariable regions, with more conserved regions correlating to higher-level taxonomy and less conserved regions to lower levels, such as genus and species.[25] While the entire 16S sequence allows for comparison of all hypervariable regions, at approximately 1,500 base pairs long it can be prohibitively expensive for studies seeking to identify or characterize diverse bacterial communities.[25] These studies commonly utilize the Illumina platform, which produces reads at costs roughly 50-fold and 12,000-fold lower than 454 pyrosequencing and Sanger sequencing, respectively.[26] While cheaper and allowing for deeper community coverage, Illumina sequencing only produces reads 75–250 base pairs long (up to 300 base pairs with Illumina MiSeq), and has no established protocol for reliably assembling the full gene in community samples.[27] Full hypervariable regions can be assembled from a single Illumina run, however, making them ideal targets for the platform.[27] While 16S hypervariable regions can vary dramatically between bacteria, the 16S gene as a whole maintains greater length homogeneity than its eukaryotic counterpart (18S ribosomal RNA), which can make alignments easier.[28] Additionally, the 16S gene contains highly conserved sequences between hypervariable regions, enabling the design of universal primers that can reliably produce the same sections of the 16S sequence across different taxa.[29] Although no hypervariable region can accurately classify all bacteria from domain to species, some can reliably predict specific taxonomic levels.[25] Many community studies select semi-conserved hypervariable regions like the V4 for this reason, as it can provide resolution at the phylum level as accurately as the full 16S gene.[25] While lesser-conserved regions struggle to classify new species when higher order taxonomy is unknown, they are often used to detect the presence of specific pathogens.
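To make the degenerate-base notation used by primers such as 27F concrete, here is a minimal Python sketch of matching a degenerate primer against the 5' end of a sequence; the IUPAC lookup table is standard, while the helper function and example sequences are illustrative assumptions, not part of any cited protocol:

```python
# Match a degenerate 16S primer (e.g. 27F, where M = A or C) against
# the 5' end of a candidate sequence. Plain Python, no dependencies.

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def primer_matches(primer: str, sequence: str) -> bool:
    """True if each 5'-end base of `sequence` is allowed by the primer."""
    if len(sequence) < len(primer):
        return False
    return all(base in IUPAC[p] for p, base in zip(primer, sequence))

PRIMER_27F = "AGAGTTTGATCMTGGCTCAG"

print(primer_matches(PRIMER_27F, "AGAGTTTGATCATGGCTCAGATTG"))  # True (A at M)
print(primer_matches(PRIMER_27F, "AGAGTTTGATCCTGGCTCAGATTG"))  # True (C at M)
print(primer_matches(PRIMER_27F, "AGAGTTTGATCGTGGCTCAGATTG"))  # False (G not in M)
```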

In one study by Chakravorty et al. in 2007, the authors characterized the V1–V8 regions of a variety of pathogens in order to determine which hypervariable regions would be most useful to include for disease-specific and broad assays.[30] Amongst other findings, they noted that the V3 region was best at identifying the genus for all pathogens tested, and that V6 was the most accurate at differentiating species between all CDC-watched pathogens tested, including anthrax.[30] While 16S hypervariable region analysis is a powerful tool for bacterial taxonomic studies, it struggles to differentiate between closely related species.[29] In the families Enterobacteriaceae, Clostridiaceae, and Peptostreptococcaceae, species can share up to 99% sequence similarity across the full 16S gene.[31] As a result, the V4 sequences can differ by only a few nucleotides, leaving reference databases unable to reliably classify these bacteria at lower taxonomic levels.[31] By limiting 16S analysis to select hypervariable regions, these studies can fail to observe differences in closely related taxa and group them into single taxonomic units, therefore underestimating the total diversity of the sample.[29] Furthermore, bacterial genomes can house multiple 16S genes, with the V1, V2, and V6 regions containing the greatest intraspecies diversity.[7] While not the most precise method of classifying bacterial species, analysis of the hypervariable regions remains one of the most useful tools available to bacterial community studies.[31] Under the assumption that evolution is driven by vertical transmission, 16S rRNA genes have long been believed to be species-specific and infallible genetic markers for inferring phylogenetic relationships among prokaryotes.

However, a growing number of observations suggest the occurrence of horizontal transfer of these genes.

In addition to observations of natural occurrence, transferability of these genes is supported experimentally using a specialized Escherichia coli genetic system.

Using a null mutant of E. coli as host, growth of the mutant strain was shown to be complemented by foreign 16S rRNA genes that were phylogenetically distinct from E. coli at the phylum level.[32][33] Such functional compatibility was also seen in Thermus thermophilus.[34] Furthermore, in T. thermophilus, both complete and partial gene transfer was observed.

Partial transfer resulted in spontaneous generation of apparently random chimera between host and foreign bacterial genes.

Thus, 16S rRNA genes may have evolved through multiple mechanisms, including vertical inheritance and horizontal gene transfer; the frequency of the latter may be much higher than previously thought.

The 16S rRNA gene is used as the standard for classification and identification of microbes, because it is present in most microbes and accumulates changes at an appropriate rate.[35] Type strain 16S rRNA gene sequences for most bacteria and archaea are available on public databases, such as NCBI.

However, the quality of the sequences found on these databases is often not validated.

Therefore, secondary databases that collect only 16S rRNA sequences are widely used.

The most frequently used databases are described below. The EzBioCloud database, formerly known as EzTaxon, consists of a complete hierarchical taxonomic system containing 62,988 bacterial and archaeal species/phylotypes, including 15,290 validly published names as of September 2018.

Based on phylogenetic relationships inferred using methods such as maximum likelihood and OrthoANI, all species/subspecies are represented by at least one 16S rRNA gene sequence.

The EzBioCloud database is systematically curated and regularly updated, and also includes novel candidate species.

Moreover, the website provides bioinformatics tools such as ANI calculator, ContEst16S and 16S rRNA DB for QIIME and Mothur pipeline.[36] The Ribosomal Database Project (RDP) is a curated database that offers ribosome data along with related programs and services.

The offerings include phylogenetically ordered alignments of ribosomal RNA (rRNA) sequences, derived phylogenetic trees, rRNA secondary structure diagrams and various software packages for handling, analyzing and displaying alignments and trees.

The data are available via ftp and electronic mail.

Certain analytic services are also provided by the electronic mail server.[37] SILVA provides comprehensive, quality-checked and regularly updated datasets of aligned small (16S/18S, SSU) and large subunit (23S/28S, LSU) ribosomal RNA (rRNA) sequences for all three domains of life (Bacteria, Archaea and Eukarya), as well as a suite of search, primer-design and alignment tools.[38] Greengenes is a quality-controlled, comprehensive 16S reference database and taxonomy based on a de novo phylogeny that provides standard operational taxonomic unit sets.

It is no longer actively maintained and was last updated in 2013.[39][40]

Smoked salmon

Smoked salmon is a preparation of salmon, typically a fillet that has been cured and hot or cold smoked.

Due to its moderately high price, smoked salmon is considered a delicacy.

Although the term lox is sometimes applied to smoked salmon, they are different products.[1] Smoked salmon is a popular ingredient in canapés, often combined with cream cheese and lemon juice.[citation needed] In New York City, Philadelphia, and other North American cities, smoked salmon is known as "nova" after the sources in Nova Scotia, and is likely to be sliced very thinly and served on bagels with cream cheese or with sliced red onion, lemon and capers.

In Pacific Northwest cuisine of the United States and Canada, smoked salmon may also be fillets or nuggets, including hickory- or alder-smoked varieties and candied salmon (smoked and honey- or sugar-glazed, also known as "Indian candy").[citation needed] In Europe, smoked salmon may be found thinly sliced or in thicker fillets, or sold as chopped "scraps" for use in cooking.

It is often used in pâtés, quiches and pasta sauces.

Scrambled eggs with smoked salmon mixed in is another popular dish.

Smoked salmon salad is a strong-flavored salad, with ingredients such as iceberg lettuce, boiled eggs, tomato, olives, capers and leeks, and with flavored yogurt as a condiment.[citation needed] Slices of smoked salmon are a popular appetizer in Europe, usually served with some kind of bread.

In the United Kingdom they are typically eaten with brown bread and a squeeze of lemon.

In Germany they are eaten on toast or black bread.

In Jewish cuisine, heavily-salted salmon is called lox and is usually eaten on a bagel with cream cheese.[2] Lox is often smoked.

Smoked salmon is sometimes used in sushi, though not widely in Japan; it is more likely to be encountered in North American sushi bars.[citation needed] The Philly roll combines smoked salmon and cream cheese, rolled in rice and nori.

Smoking is used to preserve salmon against microorganism spoilage.[3] During the process of smoking salmon, the fish is cured and partially dehydrated, which impedes the activity of bacteria.[4] An important example is Clostridium botulinum, which can be present in seafood,[5] and which is killed by the high heat treatment that occurs during the smoking process.

Smoked salmon has long featured in the cultures of Native Americans.

Smoked salmon was also a common dish in Greek and Roman culture throughout history, often being eaten at large gatherings and celebrations.[3] During the Middle Ages, smoked salmon became part of people's diet and was consumed in soups and salads.[3] The first smoking factory was established in Poland in the 7th century A.D.[4] The 19th century marked the rise of the American smoked salmon industry on the West Coast, processing Pacific salmon from Alaska and Oregon.[3] Salmon is a fish with high fat content, and smoked salmon is a good source of omega-3 fatty acids, including docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA).[6][7] Smoked salmon has a high sodium content due to the salt added during brining and curing.[7] Three ounces of smoked salmon contains approximately 660 mg of sodium, while an equivalent portion of fresh cooked salmon contains about 50 mg.[7] Although high salt content prevents the growth of microorganisms in smoked salmon by limiting water activity,[7] the American Heart Association recommends limiting sodium consumption.[8] Smoked foods, including smoked salmon, also contain nitrates and nitrites, which are by-products of the smoking process.[8] Nitrites and nitrates can be converted into nitrosamines, some of which are carcinogenic.[8] However, smoked salmon is not a major source of nitrosamine exposure for humans.[9] Studies have been conducted in which some of the sodium chloride used in smoking salmon was replaced by potassium chloride.

The study found that up to one third of the sodium chloride can be replaced by potassium chloride without changing the sensory properties of the smoked salmon.[10] Although potassium chloride has a bitter and metallic taste, the saltiness of the smoked salmon might have masked its undesirable flavor.[10] In the Atlantic basin, all smoked salmon comes from the Atlantic salmon, much of it farmed in Norway, Scotland, Ireland and the east coast of Canada (particularly in the Bay of Fundy).

In the Pacific, a variety of salmon species may be used.

Because fish farming is prohibited by state law, all of Alaska's salmon species are wild Pacific species.

Pacific species of salmon include chinook ("King"), sockeye ("red"), coho ("silver"), chum (keta), and pink ("humpback").

Most smoked salmon is cold smoked, typically at 37 °C (99 °F).

Cold smoking does not cook the fish, resulting in a delicate texture.

Although some smoke houses go for a deliberately 'oaky' style with prolonged exposure to smoke from oak chips, industrial production favours less exposure to smoke and a blander style, using cheaper woods.

Originally, prepared fish were hung upside down in lines on racks, or tenters, within the kiln.

Workers would climb up and straddle the racks while hanging the individual lines in ascending order.

Small circular wood chip fires would be lit at floor level and allowed to smoke slowly throughout the night.

The wood fire was damped with sawdust to create smoke; this was constantly tended as naked flames would cook the fish rather than smoke it.

The required duration of smoking has always been gauged by a skilled or 'master smoker' who manually checks for optimum smoking conditions.

Smoked salmon was introduced into the UK from Eastern Europe.

Jewish immigrants from Russia and Poland brought the technique of salmon smoking to London's East End, where they settled, in the late 19th century.

They smoked salmon as a way to preserve it, as refrigeration was very basic.

In the early years, they were not aware that there was a salmon native to the UK so they imported Baltic salmon in barrels of salt water.

However, having discovered the wild Scottish salmon coming down to the fish market at Billingsgate each summer, they started smoking these fish instead.

The smoking process has changed over the years and many contemporary smokehouses have left the traditional methods using brick kilns behind in favour of commercial methods.

Only a handful of traditional smokehouses remain such as John Ross Jr (Aberdeen) Ltd and the Stornoway Smokehouse in the Outer Hebrides.

The oldest smokehouse in Scotland is the Old Salmon Fish House built on the banks of the River Ugie in 1585, although not at first for smoking.[11] The oldest smokehouse in England is the 1760 Old Smokehouse in Raglan Street, Lowestoft.[12] The Northwest Indian tribes and Alaska Natives have a unique cold smoking style, resulting in a dried, "jerky-style" smoked salmon.

In the Pacific Northwest this style of salmon has been used for centuries as a primary source of food for numerous Indian tribes.

Traditionally, smoked salmon has been a staple of north-western American tribes and Canadian First Nations people.

To preserve it indefinitely in modern times, the fish is typically pressure-cooked.[citation needed] Commonly used for both salmon and trout, hot smoking 'cooks' the salmon, making it less moist and firmer, with a less delicate taste.

It may be eaten like cold-smoked salmon, or mixed with salads or pasta.

It is essential to brine the salmon sufficiently and dry the skin enough to form a pellicle prior to smoking.

Without a sufficient pellicle, albumin will ooze out of the fish as it cooks, resulting in an unsightly presentation.[citation needed] There are three main curing methods that are typically used to cure salmon prior to smoking.

The proteins in the fish are modified (denatured) by the salt, which enables the flesh of the salmon to hold moisture better than it would if not brined.

In the United States, the addition of salt is regulated by the FDA as it is a major processing aid to ensure the safety of the product.

The sugar is hydrophilic, and also adds to the moistness of the smoked salmon.

Salt and sugar are also preservatives, extending the storage life and freshness of the salmon.

Table salt (iodized salt) is not used in any of these methods, as the iodine can impart a dark color and bitter taste to the fish.[citation needed] Indian hard smoked salmon is first kippered with salt, sugar and spices and then smoked until hard and jerky-like.

See cured salmon.

The Scandinavian dish gravlax is cured, but is not smoked.[citation needed] In British Columbia, canning salmon can be traced back to Alexander Loggie in 1870 who established the first recorded commercial cannery on the Fraser River.

Canning soon became the preferred method of preserving salmon in BC, growing from three canneries in 1876 to more than ninety by the turn of the century.

Sockeye and pink salmon make up the majority of canned salmon, with the traditional product containing skin and bones – important sources of calcium and nutrients.[13] The enzymes of fish operate at an optimum temperature of about 5 °C, the temperature of the water from which they came.[14] Although the flesh of fish is bacteriologically sterile, fish carry a large number of bacteria on their slimy surfaces and in their digestive tracts.

These bacteria multiply rapidly once the fish dies and start to attack the tissues.

The growth of microorganisms can greatly affect the quality of the salmon.[14] The salmon is first dressed and washed, then cut into pieces and packed into previously sterilized cans in saline.

The cans must then undergo a double steaming process in a vacuum-sealed environment.

The steam is pressurized at 121.1 °C for 90 minutes to kill any bacteria.

After heating, the cans are cooled under running water, dried and stored in a controlled environment between 10 and 15.5 °C.[14] Before leaving the canneries, they are examined to ensure both can integrity and the safety of the fish.
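As a rough illustration of the heat-treatment parameters just described, the sketch below encodes them as checkable values; the data structure, function, and thresholds-as-minimums interpretation are illustrative assumptions rather than an actual industry specification:

```python
# Encode the canning cycle described above: pressurized steam at
# 121.1 degrees C for 90 minutes, then storage at 10-15.5 degrees C.

STERILIZATION_TEMP_C = 121.1
STERILIZATION_MINUTES = 90
STORAGE_RANGE_C = (10.0, 15.5)

def cycle_is_valid(steam_c: float, minutes: float, storage_c: float) -> bool:
    """Check a logged processing cycle against the stated parameters."""
    lo, hi = STORAGE_RANGE_C
    return (steam_c >= STERILIZATION_TEMP_C
            and minutes >= STERILIZATION_MINUTES
            and lo <= storage_c <= hi)

print(cycle_is_valid(121.1, 90, 12.0))  # True
print(cycle_is_valid(115.0, 90, 12.0))  # False: steam not hot enough
```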

The Canadian Food Inspection Agency (CFIA) is responsible for policies, labeling requirements, permitted additives, and inspections for all fish products.[15] All establishments which process fish for export or inter-provincial trade must be registered federally and implement a Quality Management Program (QMP) plan.[15] Cooking low-acid food items in a retortable pouch is a relatively new process, with the first commercial use of such retort pouches found in Italy in 1960, Denmark in 1966, and Japan in 1969.[16] It consists of enclosing the fish in "a multilayer flexible packaging consisting mainly of polypropylene (PP), aluminum foil, and polyester (PET)" instead of the metal can or glass jar used in canning; from there, the technique is quite similar.

Four different retort pouch structures were used, namely cast polypropylene (CPP), polyethylene terephthalate (PET)/silicon oxide-coated nylon/CPP (SIOX), aluminum oxide-coated PET/nylon/CPP (ALOX), and PET/aluminum foil/CPP (FOIL).[17] In the UK, "Scottish smoked salmon" is sometimes used to refer to salmon that is smoked in Scotland but sourced from elsewhere.[19][20] This is despite Food Standards Agency recommendations that such salmon be described as "Salmon smoked in Scotland" instead.[21] Labelling must also include the method of production ('farmed', 'cultivated', 'caught').[22] Smoked salmon jerky is a dehydrated salmon product that is sold to consumers ready to eat and requires no further refrigeration or cooking.

(Note there are "fresh" non-heat treated versions made by smaller local producers that require refrigeration.) It is typically made from the trimmings and by-products of salmon products in other smoking facilities.[23] Smoked salmon jerky undergoes the most heat processing of all other Smoked salmon products yet still maintains its quality as a good source of omega-3 fatty acids.[24] The two main processing techniques for salmon jerky are wet-brining and dry salting.

In both cases the salmon is trimmed into narrow slices and then stored cold for less than one day.

After being skinned and frozen, if the fish is to undergo the brining method it will require an additional step in which the salmon is left soaking in wet brine (salt solution) for one hour.

It is then removed and the excess water is discarded.

After this, in both the wet-brining and dry salting methods, ingredients such as non-iodized salt, potato starch, or light brown sugar are added.[24] In some smoked salmon jerky products, preservatives may also be added to extend the shelf life of the final product.[23] The salmon is then minced with the additives and reformed into thin strips that will be smoked for twenty hours.

Between the brining and salting methods for smoked salmon jerky, the brining method has been found to leave the salmon more tender, with up to double the moisture content of salted jerky.

The salmon jerky that undergoes the dry salting method has a tougher texture due to the lower moisture content and water activity.

Both forms of salmon jerky still have a much lower moisture content than is found in raw salmon.[24] Smoked salmon jerky is packaged using aseptic packaging to ensure the product is in a sterilized environment.

The smoked salmon jerky is commonly packaged in a vacuum-sealed bag from which the oxygen has been removed, or in a controlled-atmosphere package in which the oxygen has been replaced with nitrogen to inhibit the growth of microorganisms.[25] Because of the high heat with which smoked salmon jerky is processed, it is a shelf-stable product.[26] Depending on the integrity of the packaging and whether preservatives were used, smoked salmon jerky may have an approximate shelf life of six months to one year.[25] Smaller local producers of salmon jerky make a "fresh", non-heat-treated product that is not shelf stable.

Solar water heating

Solar water heating (SWH) is the conversion of sunlight into heat for water heating using a solar thermal collector.

A variety of configurations is available at varying cost to provide solutions in different climates and latitudes.

SWHs are widely used for residential and some industrial applications.[1] A sun-facing collector heats a working fluid that passes into a storage system for later use.

SWH systems are either active (pumped) or passive (convection-driven).

They use water only, or both water and a working fluid.

They are heated directly or via light-concentrating mirrors.

They operate independently or as hybrids with electric or gas heaters.[2] In large-scale installations, mirrors may concentrate sunlight into a smaller collector.

As of 2017, global solar hot water thermal capacity is 472 GW and the market is dominated by China, the United States and Turkey.[3] Barbados, Austria, Cyprus, Israel and Greece are the leading countries by capacity per capita.[3] Records of solar collectors in the U.S. date to before 1900,[4] involving a black-painted tank mounted on a roof.

In 1896 Clarence Kemp of Baltimore enclosed a tank in a wooden box, thus creating the first 'batch water heater' as they are known today.

Frank Shuman built the world's first solar thermal power station in Maadi, Egypt, using parabolic troughs to power a 45 to 52 kilowatts (60 to 70 horsepower) engine that pumped 23,000 litres (6,000 US gal) of water per minute from the Nile River to adjacent cotton fields.

Flat-plate collectors for solar water heating were used in Florida and Southern California in the 1920s.

Interest grew in North America after 1960, but especially after the 1973 oil crisis.

Solar power is in use in Australia, Canada, China, Germany, India, Israel, Japan, Portugal, Romania, Spain, the United Kingdom and the United States.

Israel, Cyprus and Greece are the per capita leaders in the use of solar water heating systems, supporting 30%–40% of homes.[5] Flat plate solar systems were perfected and used on a large scale in Israel.

In the 1950s a fuel shortage led the government to forbid heating water between 10 pm and 6 am.

Levi Yissar built the first prototype Israeli solar water heater, and in 1953 he launched the NerYah Company, Israel's first commercial manufacturer of solar water heaters.[6] Solar water heaters were used by 20% of the population by 1967.

Following the energy crisis in the 1970s, in 1980 Israel required the installation of solar water heaters in all new homes (except high towers with insufficient roof area).[7] As a result, Israel became the world leader in the use of solar energy per capita with 85% of households using solar thermal systems (3% of the primary national energy consumption),[8] estimated to save the country 2 million barrels (320,000 m3) of oil a year.[9] In 2005, Spain became the world's first country to require the installation of photovoltaic electricity generation in new buildings, and the second (after Israel) to require the installation of Solar water heating systems, in 2006.[10] After 1960, systems were marketed in Japan.[4] Australia has a variety of national and state and regulations for solar thermal starting with MRET in 1997.[11][12][13] Solar water heating systems are popular in China, where basic models start at around 1,500 yuan (US$235), around 80% less than in Western countries for a given collector size.

At least 30 million Chinese households have one.

The popularity is due to efficient evacuated tubes that allow the heaters to function even under gray skies and at temperatures well below freezing.[14] The type, complexity and size of a solar water heating system are determined by several factors. The minimum requirements of the system are typically determined by the amount or temperature of hot water required during winter, when a system's output and incoming water temperature are typically at their lowest.

The maximum output of the system is determined by the need to prevent the water in the system from becoming too hot.

Freeze protection measures prevent damage to the system due to the expansion of freezing transfer fluid.

Drainback systems drain the transfer fluid from the system when the pump stops.

Many indirect systems use antifreeze (e.g., propylene glycol) in the heat transfer fluid.

In some direct systems, collectors can be manually drained when freezing is expected.

This approach is common in climates where freezing temperatures do not occur often, but can be less reliable than an automatic system as it relies on an operator.

A third type of freeze protection is freeze-tolerance, where low pressure water pipes made of silicone rubber simply expand on freezing.

One such collector now has European Solar Keymark accreditation.

When no hot water has been used for a day or two, the fluid in the collectors and storage can reach high temperatures in all non-drainback systems.

When the storage tank in a drainback system reaches its desired temperature, the pumps stop, ending the heating process and thus preventing the storage tank from overheating.

Some active systems deliberately cool the water in the storage tank by circulating hot water through the collector at times when there is little sunlight or at night, thereby losing heat.

This is most effective in direct or thermal store plumbing and is virtually ineffective in systems that use evacuated tube collectors, due to their superior insulation.

Any collector type may still overheat.

High pressure, sealed solar thermal systems ultimately rely on the operation of temperature and pressure relief valves.

Low pressure, open vented heaters have simpler, more reliable safety controls, typically an open vent.

Simple designs include a glass-topped insulated box with a flat, dark-colored solar absorber made of sheet metal attached to copper heat exchanger pipes, or a set of metal tubes surrounded by an evacuated (near vacuum) glass cylinder.

In industrial cases a parabolic mirror can concentrate sunlight on the tube.

Heat is stored in a hot water storage tank.

The volume of this tank needs to be larger with solar heating systems to compensate for bad weather[clarification needed] and because the optimum final temperature for the solar collector[clarification needed] is lower than a typical immersion or combustion heater.

The heat transfer fluid (HTF) for the absorber may be water, but more commonly (at least in active systems) a separate loop of fluid containing anti-freeze and a corrosion inhibitor delivers heat to the tank through a heat exchanger (commonly a coil of copper heat exchanger tubing within the tank).

Copper is an important component in solar thermal heating and cooling systems because of its high heat conductivity, atmospheric and water corrosion resistance, sealing and joining by soldering and mechanical strength.

Copper is used both in receivers and primary circuits (pipes and heat exchangers for water tanks).[15] Another lower-maintenance concept is the 'drain-back'.

No anti-freeze is required; instead, all the piping is sloped to cause water to drain back to the tank.

The tank is not pressurized and operates at atmospheric pressure.

As soon as the pump shuts off, flow reverses and the pipes empty before freezing can occur.

Residential solar thermal installations fall into two groups: passive (sometimes called "compact") and active (sometimes called "pumped") systems.

Both typically include an auxiliary energy source (electric heating element or connection to a gas or fuel oil central heating system) that is activated when the water in the tank falls below a minimum temperature setting, ensuring that hot water is always available.

The combination of solar water heating and back-up heat from a wood stove chimney[16] can enable a hot water system to work all year round in cooler climates, without the supplemental heat requirement of a solar water heating system being met with fossil fuels or electricity.

When solar water heating and hot-water central heating systems are used together, solar heat will either be concentrated in a pre-heating tank that feeds into the tank heated by the central heating, or the solar heat exchanger will replace the lower heating element and the upper element will remain to provide supplemental heat.

However, the primary need for central heating is at night and in winter when solar gain is lower.

Therefore, solar water heating for washing and bathing is often a better application than central heating because supply and demand are better matched.

In many climates, a solar hot water system can provide up to 85% of domestic hot water energy.

This can include domestic non-electric concentrating solar thermal systems.

In many northern European countries, combined hot water and space heating systems (solar combisystems) are used to provide 15 to 25% of home heating energy.

When combined with storage, large scale solar heating can provide 50-97% of annual heat consumption for district heating.[17][18] Direct or open loop systems circulate potable water through the collectors.

They are relatively cheap.

A key drawback is vulnerability to freezing. The advent of freeze-tolerant designs expanded the market for SWH to colder climates.

In freezing conditions, earlier models were damaged when the water turned to ice, rupturing one or more components.

Indirect or closed loop systems use a heat exchanger to transfer heat from the "heat-transfer fluid" (HTF) fluid to the potable water.

The most common HTF is an antifreeze/water mix that typically uses non-toxic propylene glycol.

After heating in the panels, the HTF travels to the heat exchanger, where its heat is transferred to the potable water.

Indirect systems offer freeze protection and typically overheat protection.

Passive systems rely on heat-driven convection or heat pipes to circulate the working fluid.

Passive systems cost less and require low or no maintenance, but are less efficient.

Overheating and freezing are major concerns.

Active systems use one or more pumps to circulate water and/or heating fluid.

This permits a much wider range of system configurations.

Pumped systems are more expensive to purchase and to operate.

However, they operate at higher efficiency and can be more easily controlled.

Active systems have controllers with features such as interaction with a backup electric or gas-driven water heater, calculation and logging of the energy saved, safety functions, remote access and informative displays.

An integrated collector storage (ICS or batch heater) system uses a tank that acts as both storage and collector.

Batch heaters are thin rectilinear tanks with a glass side facing the sun at noon.

They are simple and less costly than plate and tube collectors, but they may require bracing if installed on a roof (to support 400–700 lb (180–320 kg) of water), suffer from significant heat loss at night since the side facing the sun is largely uninsulated, and are only suitable in moderate climates.

A convection heat storage unit (CHS) system is similar to an ICS system, except the storage tank and collector are physically separated and transfer between the two is driven by convection.

CHS systems typically use standard flat-plate type or evacuated tube collectors.

The storage tank must be located above the collectors for convection to work properly.

The main benefit of CHS systems over ICS systems is that heat loss is largely avoided since the storage tank can be fully insulated.

Since the panels are located below the storage tank, heat loss does not cause convection, as the cold water stays at the lowest part of the system.

Pressurized antifreeze systems use a mix of antifreeze (almost always low-toxic propylene glycol) and water as the HTF in order to prevent freeze damage.

Though effective at preventing freeze damage, antifreeze systems have drawbacks. A drainback system is an active indirect system where the HTF (usually pure water) circulates through the collector, driven by a pump.

The collector piping is not pressurized and includes an open drainback reservoir that is contained in conditioned or semi-conditioned space.

The HTF remains in the drainback reservoir unless the pump is operating, and returns there (emptying the collector) when the pump is switched off.

The collector system, including piping, must drain via gravity into the drainback tank.

Drainback systems are not subject to freezing or overheating.

The pump operates only when appropriate for heat collection, but not to protect the HTF, increasing efficiency and reducing pumping costs.[19] Plans for solar water heating systems are available on the Internet.[20] DIY SWH systems are usually cheaper than commercial ones, and they are used in both the developed and developing world.[21] Solar thermal collectors capture and retain heat from the sun and use it to heat a liquid.[23] Two important physical principles govern the technology of solar thermal collectors. Flat plate collectors are an extension of the idea of placing a collector in an 'oven'-like box with glass directly facing the Sun.[1] Most flat plate collectors have two horizontal pipes at the top and bottom, called headers, and many smaller vertical pipes connecting them, called risers.

The risers are welded (or similarly connected) to thin absorber fins.

Heat-transfer fluid (water or water/antifreeze mix) is pumped from the hot water storage tank or heat exchanger into the collectors' bottom header, and it travels up the risers, collecting heat from the absorber fins, and then exits the collector out of the top header.

Serpentine flat plate collectors differ slightly from this "harp" design, and instead use a single pipe that travels up and down the collector.

However, since they cannot be properly drained of water, serpentine flat plate collectors cannot be used in drainback systems.

The type of glass used in flat plate collectors is almost always low-iron, tempered glass.

Such glass can withstand significant hail without breaking, which is one of the reasons that flat-plate collectors are considered the most durable collector type.

Unglazed or formed collectors are similar to flat-plate collectors, except they are not thermally insulated nor physically protected by a glass panel.

Consequently, these types of collectors are much less efficient when water temperature exceeds ambient air temperatures.

For pool heating applications, the water to be heated is often colder than the ambient roof temperature, at which point the lack of thermal insulation allows additional heat to be drawn from the surrounding environment.[25] Evacuated tube collectors (ETC) are a way to reduce the heat loss inherent in flat plates.[1]

Since heat loss due to convection cannot cross a vacuum, the vacuum forms an efficient insulation mechanism to keep heat inside the collector pipes.[26] Since two flat glass sheets are generally not strong enough to withstand a vacuum, the vacuum is created between two concentric tubes.

Typically, the water piping in an ETC is therefore surrounded by two concentric tubes of glass separated by a vacuum that admits heat from the sun (to heat the pipe) but that limits heat loss.

The inner tube is coated with a thermal absorber.[27] Vacuum life varies from collector to collector, from 5 years to 15 years.

Flat plate collectors are generally more efficient than ETC in full sunshine conditions.

However, the energy output of flat plate collectors is reduced slightly more than that of ETCs in cloudy or extremely cold conditions.[1] Most ETCs are made out of annealed glass, which is susceptible to hail, failing when struck by roughly golf-ball-sized hailstones.

ETCs made from "coke glass," which has a green tint, are stronger and less likely to lose their vacuum, but efficiency is slightly reduced due to reduced transparency.

ETCs can gather energy from the sun all day long at low angles due to their tubular shape.[28] One way to power an active system is via a photovoltaic (PV) panel.

To ensure proper pump performance and longevity, the (DC) pump and PV panel must be suitably matched.

Although a PV-powered pump does not operate at night, the controller must ensure that the pump does not operate when the sun is out but the collector water is not hot enough.

PV pumps offer several advantages. A bubble pump (also known as a geyser pump) is suitable for flat panel as well as vacuum tube systems.

In a bubble pump system, the closed HTF circuit is under reduced pressure, which causes the liquid to boil at low temperature as the sun heats it.

The steam bubbles form a geyser, causing an upward flow.

The bubbles are separated from the hot fluid and condensed at the highest point in the circuit, after which the fluid flows downward toward the heat exchanger, driven by the difference in fluid levels.[30][31][32] The HTF typically arrives at the heat exchanger at 70 °C and returns to the circulating pump at 50 °C.

Pumping typically starts at about 50 °C and increases as the sun rises until equilibrium is reached.

A differential controller senses temperature differences between water leaving the solar collector and the water in the storage tank near the heat exchanger.

The controller starts the pump when the water in the collector is roughly 8–10 °C warmer than the water in the tank, and stops it when the temperature difference falls to 3–5 °C.

This ensures that stored water always gains heat when the pump operates and prevents the pump from excessive cycling on and off.
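
This start/stop logic is a simple hysteresis rule. As an illustrative sketch (not from the source), the Python below assumes example thresholds of 8 °C to start and 4 °C to stop; the function name and values are hypothetical:

def pump_should_run(collector_temp_c: float,
                    tank_temp_c: float,
                    pump_running: bool,
                    start_delta_c: float = 8.0,   # assumed start threshold (source quotes 8-10 degC)
                    stop_delta_c: float = 4.0) -> bool:  # assumed stop threshold (source quotes 3-5 degC)
    """Differential controller with hysteresis: start the pump when the
    collector is much warmer than the tank, stop it when the difference
    becomes small, and otherwise hold the current state so the pump does
    not cycle rapidly on and off."""
    delta = collector_temp_c - tank_temp_c
    if not pump_running and delta >= start_delta_c:
        return True
    if pump_running and delta <= stop_delta_c:
        return False
    return pump_running  # inside the hysteresis band: keep current state

if __name__ == "__main__":
    running = False
    for collector, tank in [(30, 25), (40, 30), (33, 30), (31, 30)]:
        running = pump_should_run(collector, tank, running)
        print(collector, tank, running)  # False, True, False, False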

(In direct systems the pump can be triggered with a difference of around 4 °C because they have no heat exchanger.)

The simplest collector is a water-filled metal tank in a sunny place.

The sun heats the tank.

This was how the first systems worked.[4] This setup would be inefficient due to the equilibrium effect: as soon as the tank and water begin to heat, the heat gained starts leaking back to the environment, and this continues until the water in the tank reaches ambient temperature.

The challenge is to limit the heat loss.

ICS or batch collectors reduce heat loss by thermally insulating the tank.[1][33] This is achieved by encasing the tank in a glass-topped box that allows heat from the sun to reach the water tank.[34] The other walls of the box are thermally insulated, reducing convection and radiation.[35] The box can also have a reflective surface on the inside.

This reflects heat lost from the tank back towards the tank.

In simple terms, one can consider an ICS solar water heater as a water tank enclosed in a type of 'oven' that retains heat from the sun as well as the heat of the water in the tank.

Using a box does not eliminate heat loss from the tank to the environment, but it largely reduces this loss.

Standard ICS collectors have a characteristic that strongly limits the efficiency of the collector: a small surface-to-volume ratio.[36] Since the amount of heat that a tank can absorb from the sun is largely dependent on the surface of the tank directly exposed to the sun, it follows that the surface size defines the degree to which the water can be heated by the sun.

Cylindrical objects such as the tank in an ICS collector have an inherently small surface-to-volume ratio.

Collectors attempt to increase this ratio for efficient warming of the water.

Variations on this basic design include collectors that combine smaller water containers with evacuated glass tube technology, a type of ICS system known as an Evacuated Tube Batch (ETB) collector.[1]

ETCs can be more useful than other solar collectors during the winter season.

ETCs can be used for heating and cooling purposes in industries such as pharmaceuticals, paper, leather, and textiles, as well as for residential houses, hospitals, nursing homes, hotels, swimming pools, etc.

An ETC can operate at a range of temperatures from medium to high for solar hot water, swimming pools, air conditioning, and solar cookers.

ETCs' higher operational temperature range (up to 200 °C (392 °F)) makes them suitable for industrial applications such as steam generation, heat engines, and solar drying.

Floating pool covering systems and separate STCs are used for pool heating.

Pool covering systems, whether solid sheets or floating disks, act as insulation and reduce heat loss.

Much heat loss occurs through evaporation, and using a cover slows evaporation.

STCs for nonpotable pool water use are often made of plastic.

Pool water is mildly corrosive due to chlorine.

Water is circulated through the panels using the existing pool filter or supplemental pump.

In mild environments, unglazed plastic collectors are more efficient as a direct system.

In cold or windy environments evacuated tubes or flat plates in an indirect configuration are used in conjunction with a heat exchanger.

This reduces corrosion.

A fairly simple differential temperature controller is used to direct the water to the panels or heat exchanger either by turning a valve or operating the pump.

Once the pool water has reached the required temperature, a diverter valve is used to return water directly to the pool without heating.[37] Many systems are configured as drainback systems where the water drains into the pool when the water pump is switched off.

The collector panels are usually mounted on a nearby roof, or ground-mounted on a tilted rack.

Due to the low temperature difference between the air and the water, the panels are often formed collectors or unglazed flat plate collectors.

A simple rule of thumb for the required panel area is 50% of the pool's surface area.[37] This applies to areas where pools are used only in the summer season.
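
Worked through with assumed example dimensions (an illustrative calculation, not from the source):

# Rule-of-thumb solar collector sizing for a summer-only pool.
# The pool dimensions below are assumed example values.
pool_length_m = 10.0
pool_width_m = 5.0
pool_surface_m2 = pool_length_m * pool_width_m   # 50 m2 of pool surface
panel_area_m2 = 0.5 * pool_surface_m2            # 50% rule of thumb -> 25 m2 of panels
print(f"Suggested collector area: {panel_area_m2:.0f} m2")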

Adding solar collectors to a conventional outdoor pool in a cold climate can typically extend the pool's comfortable usage by months, and more if an insulating pool cover is used.[25] When sized at 100% coverage, most solar hot water systems are capable of heating a pool anywhere from as little as 4 °C for a wind-exposed pool to as much as 10 °C for a wind-sheltered pool covered consistently with a solar pool blanket.[38] An active solar energy system analysis program may be used to optimize the solar pool heating system before it is built.

The amount of heat delivered by a solar water heating system depends primarily on the amount of heat delivered by the sun at a particular place (insolation).

In the tropics insolation can be relatively high, e.g. 7 kWh/m² per day, versus e.g. 3.2 kWh/m² per day in temperate areas.

Even at the same latitude average insolation can vary a great deal from location to location due to differences in local weather patterns and the amount of overcast.

Calculators are available for estimating insolation at a site.[39][40][41] Below is a table that gives a rough indication of the specifications and energy that could be expected from a solar water heating system with about 2 m² of absorber area, covering two evacuated tube and three flat plate solar water heating systems.

The figures are taken from certification information or calculated from such data.

The bottom two rows give estimates for daily energy production (kWh/day) for a tropical and a temperate scenario.

These estimates are for heating water to 50 °C above ambient temperature.

With most solar water heating systems, the energy output scales linearly with the collector surface area.[42] The figures are fairly similar between the above collectors, yielding some 4 kWh/day in a temperate climate and some 8 kWh/day in a tropical climate when using a collector with a 2 m² absorber.

In the temperate scenario this is sufficient to heat 200 litres of water by some 17 °C.

In the tropical scenario the equivalent heating would be by some 33 °C.
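
Those temperature rises follow directly from the heat capacity of water. A back-of-the-envelope check (illustrative, using the 4 kWh/day temperate figure quoted above):

# Temperature rise of a 200-litre tank from one day's collector output.
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K)
tank_mass_kg = 200.0           # 200 litres of water is roughly 200 kg
daily_energy_j = 4.0 * 3.6e6   # 4 kWh/day (temperate figure) converted to joules

delta_t_c = daily_energy_j / (tank_mass_kg * SPECIFIC_HEAT_WATER)
print(f"Temperature rise: {delta_t_c:.1f} degC")  # ~17 degC, matching the text
# Doubling to the 8 kWh/day tropical figure gives ~34 degC,
# close to the ~33 degC quoted above.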

Many thermosiphon systems have comparable energy output to equivalent active systems.

The efficiency of evacuated tube collectors is somewhat lower than for flat plate collectors because the absorbers are narrower than the tubes and the tubes have space between them, resulting in a significantly larger percentage of inactive overall collector area.

Some methods of comparison[43] calculate the efficiency of evacuated tube collectors based on the actual absorber area and not on the space occupied as has been done in the above table.

Efficiency is reduced at higher temperatures.

In sunny, warm locations, where freeze protection is not necessary, an ICS (batch type) solar water heater can be cost effective.[35] In higher latitudes, design requirements for cold weather add to system complexity and cost.

This increases initial costs, but not life-cycle costs.

The biggest single consideration is therefore the large initial financial outlay of solar water heating systems.[44] Offsetting this expense can take years.[45] The payback period is longer in temperate environments.[46] Since solar energy is free, operating costs are small.

At higher latitudes, solar heaters may be less effective due to lower insolation, possibly requiring larger and/or dual-heating systems.[46] In some countries government incentives can be significant.

Cost factors, both positive and negative, affect the payback period, which can vary greatly due to regional sun, the extra cost of frost protection for collectors, household hot water use, etc.

For instance in central and southern Florida the payback period could easily be 7 years or less rather than the 12.6 years indicated on the chart for the U.S.[47] The payback period is shorter given greater insolation.

However, even in temperate areas, solar water heating is cost-effective.

The payback period for photovoltaic systems has historically been much longer.[46] Costs and payback period are shorter if no complementary/backup system is required; needing one adds cost and extends the payback period of such a system.[45]
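
As a simple illustration of how such payback figures arise (all numbers below are assumed examples, not from the source):

# Simple (undiscounted) payback estimate for a solar water heating system.
installed_cost = 4000.0                 # assumed net cost after incentives
annual_solar_energy_kwh = 2500.0        # assumed yearly hot water demand met by solar
displaced_energy_price_per_kwh = 0.20   # assumed price of the fuel displaced

annual_savings = annual_solar_energy_kwh * displaced_energy_price_per_kwh
payback_years = installed_cost / annual_savings
print(f"Payback: {payback_years:.1f} years")  # 8.0 years with these assumptions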

Australia operates a system of Renewable Energy Credits, based on national renewable energy targets.[51] The Toronto Solar Neighbourhoods Initiative offers subsidies for the purchase of solar water heating units.[61]

The source of electricity in an active SWH system determines the extent to which a system contributes to atmospheric carbon during operation.

Active solar thermal systems that use mains electricity to pump the fluid through the panels are called 'low carbon solar'.

In most systems the pumping reduces the energy savings by about 8% and the carbon savings of the solar by about 20%.[62] However, low power pumps operate at 1–20 W.[63][64] Assuming a solar collector panel delivering 4 kWh/day and a pump running intermittently from mains electricity for a total of 6 hours during a 12-hour sunny day, the potentially negative effect of such a pump can be reduced to about 3% of the heat produced.
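
That 3% figure is easy to reproduce (illustrative arithmetic using the numbers quoted above):

# Parasitic energy of a small mains-powered circulation pump vs. collector output.
pump_power_w = 20.0           # top of the 1-20 W range quoted above
pump_hours_per_day = 6.0      # intermittent running over a 12-hour sunny day
collector_output_kwh = 4.0    # daily heat delivered by the panel

pump_energy_kwh = pump_power_w * pump_hours_per_day / 1000.0  # 0.12 kWh/day
fraction = pump_energy_kwh / collector_output_kwh
print(f"Pump uses {fraction:.0%} of the heat produced")       # ~3%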

However, PV-powered active solar thermal systems typically use a 5–30 W PV panel and a small, low power diaphragm pump or centrifugal pump to circulate the water.

This reduces the operational carbon and energy footprint.

Alternative non-electrical pumping systems may employ thermal expansion and phase changes of liquids and gases.

Recognised standards can be used to deliver robust and quantitative life cycle assessments (LCA).

LCA considers the financial and environmental costs of acquiring raw materials, manufacturing, transporting, using, servicing, and disposing of the equipment.

In terms of energy consumption, some 60% goes into the tank, with 30% towards the collector[65] (a thermosiphon flat plate in this case).

In Italy,[66] some 11 gigajoules of electricity are used in producing SWH equipment, with about 35% going toward the tank and another 35% toward the collector.

The main energy-related impact is emissions.

The energy used in manufacturing is recovered within the first 2–3 years of use (in southern Europe).

The energy payback time in the UK is similarly reported as only 2 years.

This figure was for a direct system, retrofitted to an existing water store, PV pumped, freeze tolerant and of 2.8 m² aperture.

For comparison, a PV installation took around 5 years to reach energy payback, according to the same comparative study.[67] In terms of CO2 emissions, a large fraction of the emissions saved is dependent on the degree to which gas or electricity is used to supplement the sun.

Using the Eco-indicator 99 points system as a yardstick (i.e. the yearly environmental load of an average European inhabitant) in Greece,[65] a purely gas-driven system may have fewer emissions than a solar system.

This calculation assumes that the solar system produces about half of the hot water requirements of a household.

But because methane (CH4) emissions from the natural gas fuel cycle[68] dwarf the greenhouse impact of CO2, the net greenhouse emissions (CO2e) from gas-driven systems are vastly greater than for solar heaters, especially if supplemental electricity is also from carbon-free generation.[citation needed]

A test system in Italy produced about 700 kg of CO2, considering all the components of manufacture, use and disposal.

Maintenance was identified as an emissions-costly activity when the heat transfer fluid (glycol-based) was replaced.

However, the emissions cost was recovered within about two years of use of the equipment.[66] In Australia, life cycle emissions were also recovered.

The tested SWH system had about 20% of the impact of an electrical water heater and half that of a gas water heater.[45] Analysing their lower impact retrofit freeze-tolerant solar water heating system, Allen et al. (qv) reported a production CO2 impact of 337 kg, which is around half the environmental impact reported in the Ardente et al. (qv) study.

All relevant participants of the Large-scale Renewable Energy Target and Small-scale Renewable Energy Scheme must comply with the above Acts.[70]

Seal (East Asia)

A seal, in an East and Southeast Asian context, is a general name for printing stamps and impressions thereof which are used in lieu of signatures in personal documents, office paperwork, contracts, art, or any item requiring acknowledgement or authorship.

In the western world they were traditionally known by traders as chop marks or simply chops.

The process started in China and soon spread across East Asia.

China, Japan and Korea currently use a mixture of seals and hand signatures, and increasingly, electronic signatures.[1] Chinese seals are typically made of stone, sometimes of metals, wood, bamboo, plastic, or ivory, and are typically used with red ink or cinnabar paste (Chinese: 朱砂; pinyin: zhūshā).

The word 印 ("yìn" in Mandarin, "in" in Japanese and Korean) specifically refers to the imprint created by the seal, and also appears in combination with other ideographs in words related to printing, as in the word "印刷" ("printing"), pronounced "yìnshuā" in Mandarin and "insatsu" in Japanese.

The colloquial name chop, when referring to these kinds of seals, was adapted from the Hindi word chapa and from the Malay word cap[2] meaning stamp or rubber stamps.

In Japan, seals (hanko) have historically been used to identify individuals involved in government and trading from ancient times.

The Japanese emperors, shōguns, and samurai each had their own personal seal pressed onto edicts and other public documents to show authenticity and authority. Even today Japanese citizens' companies regularly use name seals for the signing of a contract and other important paperwork.[3] Baiwen seal Zhuwen seal Zhubaiwen Xiangjianyin Zhuwen on right side, Baiwen on left side Baiwen on right side, Zhuwen on left side The Chinese emperors, their families and officials used large seals known as xǐ (玺; 璽), later renamed bǎo (宝; 寶; 'treasure'), which corresponds to the Great Seals of Western countries. These were usually made of jade (although hard wood or precious metal could also be used), and were originally square in shape. They were changed to a rectangular form during the Song dynasty, but reverted to square during the Qing dynasty. The most important of these seals was the Heirloom Seal of the Realm, which was created by the First Emperor of China, Qin Shi Huang, and was seen as a legitimising device embodying or symbolising the Mandate of Heaven. The Heirloom Seal was passed down through several dynasties, but had been lost by the beginning of the Ming dynasty. This partly explains the Qing emperors' obsession with creating numerous imperial seals - for the emperors' official use alone the Forbidden City in Beijing has a collection of 25 seals - in order to reduce the significance of the Heirloom Seal. These seals typically bore the titles of the offices, rather than the names of the owners. Different seals could be used for different purposes: for example, the Qianlong Emperor had a number of informal appreciation seals (simplified Chinese: 乾隆御览之宝; traditional Chinese: 乾隆御覽之寶; pinyin: Qiánlóng yùlǎn zhī bǎo; lit.: 'Seal(s) for [use during] the Qiánlóng emperor's inspection') used on select paintings in his collection. The most popular style of script for government seals in the imperial eras of China (from the Song dynasty to Qing dynasty) is the Nine-fold Script (九叠文; 九疊文; jiǔdiéwén), a highly stylised script which is unreadable to the untrained. The government of the Republic of China (Taiwan) has continued to use traditional square seals of up to about 13 centimetres each side, known by a variety of names depending on the user's hierarchy. Part of the inaugural ceremony for the President of the Republic of China includes bestowing on him the Seal of the Republic of China and the Seal of Honor. In the People's Republic of China, the seal of the Central People's Government from 1949 to 1954[4] was a square, bronze seal with side lengths of 9 centimetres. The inscription reads "Seal of the Central People's Government of the People's Republic of China". Notably, the seal uses the relatively modern Song typeface rather than the more ancient seal scripts, and the seal is called a yìn (印), not a xǐ (玺; 璽). Government seals in the People's Republic of China today are usually circular in shape, and have a five-pointed star in the centre of the circle. The name of the governmental institution is arranged around the star in a semicircle. There are many classes of personal seals. Denotes the person's name. Are the equivalent of today's email signature, and can contain the person's personal philosophy or literary inclination. These can be any shape, ranging from ovals to dragon-shaped. Carry the name of the person's private studio 書齋, which most literati in ancient China had, although probably in lesser forms. These are more or less rectangular in shape. 
There are two types of seal paste depending on what base material they are made of. The standard colour is vermilion red (or lighter or darker shades of red), but other colours such as black or navy can be used for specific purposes. Plant-based paste tends to dry more quickly than silk-based paste because the plant extract does not hold onto the oil as tightly as silk. Depending on the paper used, plant pastes can dry in 10 to 15 minutes. The more absorbent the paper is, the faster it dries, as the paper absorbs most of the oil. Plant pastes also tend to smudge more easily than silk pastes due to the looser binding agent.

The paste is kept covered after it has been used, in its original container (be it plastic or ceramic), in an environment away from direct sunlight and intense heat to prevent it from drying out. Silk-based pastes need to be stirred with a spatula every month or so, to keep the oil from sinking down and drying out the paste as well as to prepare it for use. A good paste produces a clear impression in one go; if the impression is unclear and requires further impressions, the paste is either too dry or the cinnabar has been depleted.

When the seal is pressed onto the printing surface, the procedure differs according to plant- or silk-based paste. For silk-based paste, the user applies pressure, often with a specially made soft, flat surface beneath the paper. For plant-based paste, the user simply applies light pressure. As lifting the seal vertically away from its imprint may rip or damage the paper, the seal is usually lifted off one side at a time, as if bent off from the page. After this, the image may be blotted with a piece of paper to make it dry faster, although this may smudge it. Usually there needs to be a pile of soft felt or paper under the paper to be imprinted for a clear seal impression.

Many people in China possess a personal name seal. Artists, scholars, collectors and intellectuals may possess a full set of name seals, leisure seals, and studio seals. A well-made seal made from semi-precious stones can cost between 400 and 4,000 yuan.

Seals are still used for official purposes in a number of contexts. When collecting parcels or registered post, the name seal serves as identification, akin to a signature. In banks, traditionally the method of identification was also by a seal. Seals remain the customary form of identification on cheques in mainland China and Taiwan. Today, personal identification is often by a hand signature accompanied by a seal imprint. Seals can serve as identification alongside signatures because they are difficult to forge (compared to forging a signature) and only the owner has access to his own seal.

Seals are also often used on Chinese calligraphy works and Chinese paintings, usually imprinted in such works in the order (from top to bottom) of name seal, leisure seal(s), then studio seal. Owners or collectors of paintings or books will often add their own studio seals to pieces they have collected. This practice is an act of appreciation towards the work. Some artworks have had not only seals but inscriptions of the owner on them; for example, the Qianlong Emperor had as many as 20 different seals for use with inscriptions on paintings he collected.
Provided that it is tastefully done (for example, not obscuring the body of the painting, with an appropriate inscription and fine calligraphy), this practice does not devalue the painting and may even enhance it by giving it further provenance, especially if it is a seal of a famous or celebrated individual who possessed the work at some point.

Seals are usually carved by specialist seal carvers, or by the users themselves. Specialist carvers will carve the user's name into the stone in one of the standard scripts and styles described above, usually for a fee. Some people instead carve their own seals using soapstone and fine knives, which are widely available and cheaper than paying a professional for expertise, craft and material. Results vary, but it is possible for individuals to carve perfectly legitimate seals for themselves.

As a novelty souvenir, seal carvers also ply the tourist business at Chinatowns and tourist destinations in China. They often carve on-the-spot transliterations or translations of foreign names on inexpensive soapstone, sometimes featuring Roman characters. Though such seals can be functional, they are typically nothing more than curios, may be inappropriate for serious use, and could actually devalue or deface serious works of art.

Determining which side of the seal should face up may be done in a number of ways: if there is a carving on top, the front should face the user; if there is an inscription on the side, it should face to the left of the user; if there is a dot on the side, it should face away from the user.

Once seals are used, as much paste as possible is wiped from the printing surface and off the edges with a suitable material. The seals are kept in a constant environment, especially seals made of sandalwood or black ox horn. Tall thin seals are best kept on their sides, in case they should wobble and fall down. More important seals, such as authority and society seals, are encased or wrapped in a golden silk cloth for added protection.

In Hong Kong, seals have fallen out of general use, as signatures are often required. In the past, seals were used by businesses on documents related to transactions, and in lieu of a signature for the city's illiterate population.[5]

In Japan, seals in general are referred to as inkan (印鑑) or hanko (判子).[6] Inkan is the most comprehensive term; hanko tends to refer to seals used in less important documents. The first evidence of writing in Japan is a hanko dating from AD 57, made of solid gold, given to the ruler of Nakoku by Emperor Guangwu of Han and called the King of Na gold seal.[7] At first, only the Emperor and his most trusted vassals held hanko, as they were a symbol of the Emperor's authority. Noble people began using their own personal hanko after 750, and samurai began using them sometime during the Feudal Period. Samurai were permitted exclusive use of red ink. After modernization began in 1870, hanko came into general use throughout Japanese society.

Government offices and corporations usually have inkan specific to their bureau or company that follow the general rules outlined for jitsuin, with the following exceptions. In size, they are comparatively large, measuring 2 to 4 inches (5.1 to 10.2 cm) across. Their handles are often extremely ornately carved with friezes of mythical beasts or hand-carved hakubun inscriptions that might be quotes from literature, names and dates, or original poetry.
The Privy Seal of Japan is an example; weighing over 3.55 kg and measuring 9.09 cm in size, it is used for official purposes by the Emperor. Some seals have been carved with square tunnels from handle to underside, so that a specific person can slide their own inkan into the hollow, thus signing a document with both their name and the business's (or bureau's) name. These seals are usually stored in jitsuin-style boxes under high security except at official ceremonies, at which they are displayed on extremely ornate stands or in their boxes.

For personal use, there are at least four kinds of seals. In order from most formal/official to least, they are jitsuin, ginkō-in, mitome-in, and gagō-in.[6]

A jitsuin (実印) is an officially registered seal. A registered seal is needed to conduct business and other important or legally binding events. A jitsuin is used when purchasing a vehicle, marrying, purchasing land, and so on. The size, shape, material, decoration, and lettering style of jitsuin are closely regulated by law. For example, in Hiroshima, a jitsuin is expected to be roughly 1⁄2 to 1 inch (1.3 to 2.5 cm), usually square or (rarely) rectangular but never round, irregular, or oval. It must contain the individual's full family and given name, without abbreviation. The lettering must be red with a white background (shubun), with roughly equal width lines used throughout the name. The font must be one of several based on ancient historical lettering styles found in metal, woodcarving, and so on. Ancient forms of ideographs are commonplace. A red perimeter must entirely surround the name, and there should be no other decoration on the underside (working surface) of the seal. The top and sides (handle) of the seal may be decorated in any fashion, from completely undecorated to historical animal motifs to dates, names, and inscriptions.

Throughout Japan, rules governing jitsuin design are so stringent, and each design so unique, that the vast majority of people entrust the creation of their jitsuin to a professional, paying upward of US$20 and more often closer to US$100, and use it for decades. People desirous of opening a new chapter in their lives (say, following a divorce, the death of a spouse, a long streak of bad luck, or a change in career) will often have a new jitsuin made.

The material is usually a high-quality hard stone or, far less frequently, deerhorn, soapstone, or jade. It is sometimes carved by machine. When it is carved by hand, an intō ("seal-engraving blade"), a mirror, and a small specialized wooden vice are used. An intō is a flat-bladed pencil-sized chisel, usually round or octagonal in cross-section and sometimes wrapped in string to give the handle a non-slip surface. The intō is held vertically in one hand, with the point projecting from one's fist on the side opposite one's thumb. New, modern intō range in price from less than US$1 to US$100.

The jitsuin is kept in a very secure place such as a bank vault, or hidden carefully in one's home. They are usually stored in thumb-sized rectangular boxes made of cardboard covered with heavily embroidered green fabric outside and red silk or red velvet inside, held closed by a white plastic or deerhorn splinter tied to the lid and passed through a fabric loop attached to the lower half of the box. Because of the superficial resemblance to coffins, they are often called "coffins" in Japanese by enthusiasts and hanko boutiques. The paste is usually stored separately.

A ginkō-in (銀行印) is used specifically for banking; ginkō means "bank".
A person's savings account passbook contains an original impression of the ginkō-in alongside a bank employee's seal. Rules for the size and design vary somewhat from bank to bank; generally, they contain a Japanese person's full name. A Westerner may be permitted to use a full family name with or without an abbreviated given name, such as "Smith", "Bill Smith", "W Smith" or "Wm Smith" in place of "William Smith". The lettering can be red or white, in any font, and with artistic decoration. Most people have them custom-made by professionals or make their own by hand, since mass-produced ginkō-in offer no security. They are wood or stone and carried about in a variety of thumb-shaped and -sized cases resembling cloth purses or plastic pencil cases. They are usually hidden carefully in the owner's home.

Banks always provide stamp pads or ink paste, in addition to dry cleaning tissues. The banks also provide small plastic scrubbing surfaces similar to small patches of red artificial grass. These are attached to counters and used to scrub the accumulated ink paste from the working surface of customers' seals.

A mitome-in (認印) is a moderately formal seal typically used for signing for postal deliveries, signing utility bill payments, signing internal company memos, confirming receipt of internal company mail, and other low-security everyday functions. Mitome-in are commonly stored in low-security, high-utility places such as office desk drawers and in the anteroom (genkan) of a residence.

A mitome-in's form is governed by far fewer customs than jitsuin and ginkō-in. However, mitome-in adhere to a handful of strongly observed customs. The size is the attribute most strongly governed by social custom: it is usually the size of an American penny or smaller, a male's is usually slightly larger than a female's, and a junior employee's is always smaller than his boss's and his senior co-workers', in keeping with office social hierarchy. The mitome-in always has the person's family name and usually does not have the person's given name (shita no namae). They are often round or oval, but square ones are not uncommon, and rectangular ones are not unheard-of; they are always geometric figures. They can have red lettering on a blank field (shubun) or the opposite (hakubun). Borderlines around their edges are optional.

Plastic mitome-in in popular Japanese names can be obtained from stationery stores for less than US$1, though ones made from inexpensive stone are also very popular. Inexpensive prefabricated seals are called sanmonban (三文判). Prefabricated rubber stamps are unacceptable for business purposes. Mitome-in and lesser seals are usually stored in inexpensive plastic cases, sometimes with small supplies of red paste or a stamp pad included.

Most Japanese also have a far less formal seal used to sign personal letters or initial changes in documents; this is referred to by the broadly generic term hanko. They often display only a single hiragana character, kanji ideograph, or katakana character carved in them. They are as often round or oval as they are square. They vary in size from 0.5 to 1.5 centimetres wide (0.20 to 0.59 in); women's tend to be small.

Gagō-in (雅号印) are used by graphic artists to both decorate and sign their work. The practice goes back several hundred years. The signatures are frequently pen names or nicknames; the decorations are usually favorite slogans or other extremely short phrases. A gagō-in can be any size, design, or shape.
Irregular, naturally occurring outlines and handles, as though a river stone were cut in two, are commonplace. The material may be anything, though in modern times soft stone is the most common and metal is rare.

Traditionally, inkan and hanko are engraved on the end of a finger-length stick of stone, wood, bone, or ivory, with a diameter between 25 and 75 millimetres (0.98 and 2.95 in). Their carving is a form of calligraphic art. Foreign names may be carved in rōmaji, katakana, hiragana, or kanji. Inkan for standard Japanese names may be purchased prefabricated.

Almost every stationery store, discount store, large book store, and department store carries small do-it-yourself kits for making hanko. These include instructions, hiragana fonts written forward and in mirror-writing (as they would appear on the working surface of a seal), a slim intō chisel, two or three grades of sandpaper, a slim marker pen (to draw the design on the stone), and one to three mottled, inexpensive, soft square green finger-sized stones.

In modern Japan, most people have several inkan. A certificate of authenticity is required for any hanko used in a significant business transaction. Registration and certification of an inkan may be obtained in a local municipal office (e.g., city hall). There, a person receives a "certificate of seal impression" known as inkan tōroku shōmei-sho (印鑑登録証明書). The increasing ease with which modern technology allows hanko fraud is beginning to cause some concern that the present system will not be able to survive.

Signatures are not used for most transactions, but in some cases, such as signing a cell phone contract, they may be used, sometimes in addition to a stamp from a mitome-in. For these transactions, a jitsuin is too official, while a mitome-in alone is insufficient, and thus signatures are used.[8][9]

Chinese-style seals were also utilized by the Ryūkyū Kingdom.[10]

The seal was first introduced to Korea in approximately the 2nd century BC. The oldest remaining record of its usage in Korea is that the kings of Buyeo used a royal seal (oksae: 옥새, 玉璽) which bore the inscription Seal of the King of Ye (濊王之印, 예왕지인). The use of seals became popular during the Three Kingdoms of Korea period.

In the case of state seals in monarchic Korea, there were two types in use. Gugin (국인, 國印) was conferred by the Emperor of China to Korean kings, with the intent of keeping relations between the two countries as brothers (Sadae); it was used only in communications with China and for the coronation of kings. Others, generally called eobo (어보, 御寶) or eosae (어새, 御璽), were used in foreign communications with countries other than China, and for domestic uses.

With the declaration of the establishment of the Republic of Korea in 1948, its government created a new State Seal, guksae (국새, 國璽), and it is used in the promulgation of the constitution, designation of cabinet members and ambassadors, conferral of national orders and important diplomatic documents. Seals were also used by government officials in documents. These types of seals were called gwanin (관인, 官印) and their use was supervised by specialist officials.

In modern Korea, the use of seals is still common. Most Koreans have personal seals, and every government agency and commercial corporation has its own seals to use in public documents. While signing is also accepted, many Koreans think it is more formal to use seals in public documents.
In 2008, the Constitutional Court of South Korea upheld a Supreme Court judgement that a signed and handwritten will which lacked a registered seal was invalid.[11]

Personal seals (Korean: 도장; RR: dojang) in Korea can be classified by their legal status. Ingam (인감, 印鑑) or sirin (실인, 實印), meaning registered seal, is a seal which is registered with the local office. By registering the seal, a person can issue a "certificate of seal registration" (Korean: 인감증명서; Hanja: 印鑑證明書; RR: ingam-jungmyeong-seo), which is a required document for most significant business transactions and civil services.

The legal system of registered seals was introduced by the Japanese colonial government in 1914. While it was scheduled to be completely replaced by an electronic certification system in 2013 in order to counter fraud, ingam still remains an official means of verification for binding legal agreements and identification.[12] The government did pass the 'Act on Confirmation, etc. of Personal Signature (본인서명사실 확인 등에 관한 법률)' in 2012, which allows pre-registered handwritten signatures to have the same legal effect as ingam.[13] While ingam is used in important business, other dojang are used for everyday purposes such as less significant official transactions. Thus most Koreans have more than two seals.

In traditional arts, as in China and Japan, an artist of Chinese calligraphy and painting would use seals (generally leisure seals and studio seals) to identify his or her work. These types of seals were called nakkwan (낙관, 落款). As seal-carving was itself considered a form of art, many artists carved their own seals. Seals of the Joseon-period calligrapher and natural historian Kim Jung-hee (also known as Wandang or Chusa) are considered antiques.

Korean seals are made of wood, jade, or sometimes ivory for more value. State Seals were generally made of gold or high-quality jade. Rare cases of bronze or steel seals exist.

The Philippines also had a sealing culture prior to Spanish colonization. However, when the Spaniards succeeded in colonizing the islands, they abolished the practice and burned all the documents they captured from the natives while forcefully establishing a Roman Catholic-based rule. Records on Philippine seals were forgotten until the 1970s, when actual ancient seals made of ivory were found in an archaeological site in Butuan. The seal, now known as the Butuan Ivory Seal, has been declared a National Cultural Treasure. The seal is inscribed with the word "Butwan" in a native suyat script.

The discovery of the seal proved the theory that pre-colonial Filipinos, at least in coastal areas, used seals on paper. Before the discovery of the seal, it was only thought that ancient Filipinos used bamboo, metal, bark, and leaves for writing. The presence of paper documents in the classical era of the Philippines is also backed by research of Dr. H. Otley Beyer, father of Philippine anthropology, stating that Spanish friars 'boasted' about burning ancient Philippine documents with suyat inscriptions, one of the reasons why ancient documents from the Philippines are almost non-existent today. The ivory seal is now housed at the National Museum of the Philippines. Nowadays, younger generations are trying to revive the usage of seals, notably in signing pieces of art such as drawings, paintings, calligraphy, and literary works.[14]

The seal is used to a lesser extent in Vietnam by authorised organisations and businesses, and also by traditional Vietnamese artists.
It was more common in Vietnam prior to French rule, after which the practice of signing became commonplace, although western-style signatures are usually seen as having less authority in a corporate setting.[15]

While Chinese-style seals are typically used in China, Japan, and Korea, they are occasionally used outside East Asia. For example, the rulers of the Ilkhanate, a Mongol khanate established by Hulagu Khan in Persia, used seals containing Chinese characters in each of their diplomatic letters, such as the letter from Arghun to French King Philip IV and the letter from Ghazan to Pope Boniface VIII. These seals were sent by the emperors of the Yuan dynasty, the ruling dynasty of China and Mongolia, especially by Kublai Khan and his successor Emperor Chengzong.

Social media

Social media are interactive computer-mediated technologies that facilitate the creation or sharing of information, ideas, career interests and other forms of expression via virtual communities and networks.[1][2] The variety of stand-alone and built-in social media services currently available introduces challenges of definition; however, there are some common features.[2] Users usually access social media services via web-based apps on desktops and laptops, or download services that offer social media functionality to their mobile devices (e.g., smartphones and tablets). As users engage with these electronic services, they create highly interactive platforms through which individuals, communities, and organizations can share, co-create, discuss, participate in and modify user-generated content or self-curated content posted online.

Networks formed through social media change the way groups of people interact and communicate. They "introduce substantial and pervasive changes to communication between organizations, communities, and individuals."[1] These changes are the focus of the emerging field of technoself studies. Social media differ from paper-based media (e.g., magazines and newspapers) and traditional electronic media such as TV and radio broadcasting in many ways, including quality,[5] reach, frequency, interactivity, usability, immediacy, and performance. Social media outlets operate in a dialogic transmission system (many sources to many receivers).[6] This is in contrast to traditional media, which operate under a monologic transmission model (one source to many receivers), such as a newspaper which is delivered to many subscribers, or a radio station which broadcasts the same programs to an entire city.

Some of the most popular social media websites, with over 100 million registered users, include Facebook (and its associated Facebook Messenger), YouTube, TikTok, WeChat, Instagram, QQ, QZone, Weibo, Twitter, Tumblr, Telegram, Baidu Tieba, LinkedIn, WhatsApp, LINE, Snapchat, Pinterest, Viber, VK, Reddit, Discord and more.

Observers have noted a wide range of positive and negative impacts of social media use. Social media can help to improve an individual's sense of connectedness with real or online communities and can be an effective communication (or marketing) tool for corporations, entrepreneurs, non-profit organizations, advocacy groups, political parties, and governments.
Social media may have roots in the 1840s introduction of the telegraph, which connected the United States.[7] The PLATO system, launched in 1960, developed at the University of Illinois and subsequently commercially marketed by Control Data Corporation, offered early forms of social media features with 1973-era innovations such as Notes, PLATO's message-forum application; TERM-talk, its instant-messaging feature; Talkomatic, perhaps the first online chat room; News Report, a crowd-sourced online newspaper and blog; and Access Lists, enabling the owner of a note file or other application to limit access to a certain set of users, for example, only friends, classmates, or co-workers.

ARPANET, which first came online in 1969, had by the late 1970s developed a rich cultural exchange of non-government/business ideas and communication, as evidenced by the network etiquette (or "netiquette") described in a 1982 handbook on computing at MIT's Artificial Intelligence Laboratory.[8] ARPANET evolved into the Internet following the publication of the first Transmission Control Protocol (TCP) specification, RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in 1974.[9] This became the foundation of Usenet, conceived by Tom Truscott and Jim Ellis in 1979 at the University of North Carolina at Chapel Hill and Duke University, and established in 1980.

A precursor of the electronic bulletin board system (BBS), known as Community Memory, had already appeared by 1973. True electronic bulletin board systems arrived with the Computer Bulletin Board System in Chicago, which first came online on February 16, 1978. Before long, most major cities had more than one BBS running on TRS-80, Apple II, Atari, IBM PC, Commodore 64, Sinclair, and similar personal computers. The IBM PC was introduced in 1981, and subsequent models of both Mac computers and PCs were used throughout the 1980s. Multiple modems, followed by specialized telecommunication hardware, allowed many users to be online simultaneously. CompuServe, Prodigy and AOL were three of the largest BBS companies and were the first to migrate to the Internet in the 1990s. Between the mid-1980s and the mid-1990s, BBSes numbered in the tens of thousands in North America alone.[10] Message forums (a specific structure of social media) arose with the BBS phenomenon throughout the 1980s and early 1990s. When the World Wide Web (WWW, or "the web") was added to the Internet in the mid-1990s, message forums migrated to the web, becoming Internet forums, primarily due to cheaper per-person access as well as the ability to handle far more people simultaneously than telco modem banks.

Digital imaging and semiconductor image sensor technology facilitated the development and rise of social media.[11] Advances in metal-oxide-semiconductor (MOS) semiconductor device fabrication, reaching smaller micron and then sub-micron levels during the 1980s–1990s, led to the development of the NMOS (n-type MOS) active-pixel sensor (APS) at Olympus in 1985,[12][13] and then the complementary MOS (CMOS) active-pixel sensor (CMOS sensor) at NASA's Jet Propulsion Laboratory (JPL) in 1993.[12][14] CMOS sensors enabled the mass proliferation of digital cameras and camera phones, which bolstered the rise of social media.[11]

An important feature of social media is digital media data compression,[15][16] due to the impractically high memory and bandwidth requirements of uncompressed media.[17] The most important compression algorithm is the discrete cosine transform (DCT),[17][18] a lossy compression technique that was first proposed by Nasir Ahmed in 1972.[19] DCT-based compression standards include the H.26x and MPEG video coding standards introduced from 1988 onwards,[18] and the JPEG image compression standard introduced in 1992.[20][15] JPEG was largely responsible for the proliferation of digital images and digital photos which lie at the heart of social media,[15] and the MPEG standards did the same for digital video content.[16] As of 2014, the JPEG image format was used more than a billion times on social networks every day.[21][22]

GeoCities was one of the World Wide Web's earliest social networking websites, appearing in November 1994, followed by Classmates in December 1995 and SixDegrees in May 1997.
According to CBS News, SixDegrees is "widely considered to be the very first social networking site", as it included "profiles, friends lists and school affiliations" that could be used by registered users.[23] Open Diary was launched in October 1998; LiveJournal in April 1999; Ryze in October 2001; Friendster in March 2003; the corporate and job-oriented site LinkedIn in May 2003; hi5 in June 2003; MySpace in August 2003; Orkut in January 2004; Facebook in February 2004; YouTube in February 2005; Yahoo! 360° in March 2005; Bebo in July 2005; the text-based service Twitter, in which posts, called "tweets", were limited to 140 characters, in July 2006; Tumblr in February 2007; and Google+ in July 2011.[24][25][26]

The variety of evolving stand-alone and built-in social media services makes them challenging to define,[2] though marketing and social media experts broadly agree that social media comprise some 13 distinguishable types of service.[27] The idea that social media are defined simply by their ability to bring people together has been seen as too broad, as this would suggest that fundamentally different technologies like the telegraph and telephone are also social media.[28] The terminology is unclear, with some early researchers referring to social media as social networks or social networking services in the mid-2000s.[4] A more recent paper from 2015[2] reviewed the prominent literature in the area and identified four common features unique to then-current social media services. In 2019, Merriam-Webster defined social media as "forms of electronic communication (such as websites for social networking and microblogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)".[29]

The development of social media started off with simple platforms such as sixdegrees.com.[30] Unlike instant messaging clients such as ICQ and AOL's AIM, or chat clients like IRC, iChat or Chat Television, sixdegrees.com was the first online business created for real people using their real names. The first social networks were short-lived, however, because their users lost interest. The Social Network Revolution has led to the rise of networking sites. Research[31] shows that the audience spends 22% of its time on social networks, demonstrating how popular social media platforms have become. This increase is because of the widespread daily use of smartphones.[32] Social media are used to document memories, learn about and explore things, advertise oneself and form friendships, as well as to grow ideas through the creation of blogs, podcasts, videos, and gaming sites.[33] Networked individuals create, edit, and manage content in collaboration with other networked individuals. This way they contribute to expanding knowledge. Wikis are examples of collaborative content creation.

Mobile social media refer to the use of social media on mobile devices such as smartphones and tablet computers. Mobile social media are a useful application of mobile marketing because the creation, exchange, and circulation of user-generated content can assist companies with marketing research, communication, and relationship development.[34] Mobile social media differ from others because they incorporate the current location of the user (location-sensitivity) or the time delay between sending and receiving messages (time-sensitivity).
According to Andreas Kaplan, mobile social media applications can be differentiated among four types.[34]

Some social media sites have the potential for content posted there to spread virally over social networks. The term is an analogy to the concept of viral infections, which can spread rapidly from person to person. In a social media context, content or websites that are "viral" (or which "go viral") are those with a greater likelihood that users will reshare content posted (by another user) to their social network, leading to further sharing. In some cases, posts containing popular content or fast-breaking news have been rapidly shared and reshared by a huge number of users. Many social media sites provide specific functionality to help users reshare content, such as Twitter's retweet button, Pinterest's pin function, Facebook's share option or Tumblr's reblog function. Businesses have a particular interest in viral marketing tactics because a viral campaign can achieve widespread advertising coverage (particularly if the viral reposting itself makes the news) for a fraction of the cost of a traditional marketing campaign, which typically uses printed materials, like newspapers, magazines, mailings, and billboards, and television and radio commercials. Nonprofit organizations and activists may have similar interests in posting content on social media sites with the aim of it going viral.

A popular component and feature of Twitter is retweeting. Twitter allows people to keep up with important events, stay connected with their peers, and contribute in various ways across social media.[35] When certain posts become popular, they start to get retweeted over and over again, becoming viral. Hashtags can be used in tweets, and can also be used to count how many people have used a given hashtag.

Social media can enable companies to gain greater market share and increased audiences.[36] Internet bots have been developed which facilitate social media marketing. Bots are automated programs that run over the Internet.[37] Chatbots and social bots are programmed to mimic natural human interactions such as liking, commenting, following, and unfollowing on social media platforms.[38] A new industry of bot providers has been created.[39] Social bots and chatbots have created an analytical crisis in the marketing industry[40] as they make it difficult to differentiate between human interactions and automated bot interactions.[40] Some bots negatively affect marketing data, causing a "digital cannibalism" in social media marketing. Additionally, some bots violate the terms of use on many platforms such as Instagram, which can result in profiles being taken down and banned.[41]

"Cyborgs", combinations of a human and a bot,[42][43] are used to spread fake news or create a marketing "buzz".[44] Cyborgs can be bot-assisted humans or human-assisted bots.[45] An example is a human who registers an account for which they set automated programs to post, for instance tweets, during their absence.[45] From time to time, the human participates to tweet and interact with friends. Cyborgs make it easier to spread fake news, as they blend automated activity with human input.[45] When the automated accounts are publicly identified, the human part of the cyborg is able to take over and could protest that the account has been used manually all along.
Such accounts try to pose as real people; in particular, the number of their friends or followers should resemble that of a real person.

There has been rapid growth in the number of U.S. patent applications that cover new technologies related to social media, and the number of them that are published has been growing rapidly over the past five years. There are now over 2,000 published patent applications.[47] As many as 7,000 applications may be currently on file, including those that have not yet been published. Only slightly over 100 of these applications have issued as patents, however, largely due to the multi-year backlog in examination of business method patents, patents which outline and claim new methods of doing business.[48]

According to Statista, it was estimated that there would be around 2.77 billion social media users around the globe in 2019, up from 2.46 billion in 2017.[49] In 2020, there were 3.8 billion social media users.[50] Lists of the leading social networks rank them by number of active users (in millions) as of April 2020, with figures for Baidu Tieba, LinkedIn, Viber and Discord from October 2019.[51]

According to a survey conducted by Pew Research in 2018, Facebook and YouTube dominate the social media landscape, as notable majorities of U.S. adults use each of these sites. At the same time, younger Americans (especially those ages 18 to 24) stand out for embracing a variety of platforms and using them frequently. Some 78% of 18- to 24-year-olds use Snapchat, and a sizeable majority of these users (71%) visit the platform multiple times per day. Similarly, 71% of Americans in this age group now use Instagram and close to half (45%) are Twitter users. However, Facebook remains the primary platform for most Americans. Roughly two-thirds of U.S. adults (68%) now report that they are Facebook users, and roughly three-quarters of those users access Facebook on a daily basis. With the exception of those 65 and older, a majority of Americans across a wide range of demographic groups now use Facebook.[52] After this rapid growth, the number of new U.S. Facebook accounts created has plateaued, with little observable growth over 2016–18.[53]

Governments may use social media for a variety of purposes.[54] The high penetration of social media in people's private lives drives companies to explore its applications in the workplace. Marketplace actors can use social media tools for marketing research, communication, sales promotions/discounts, informal employee learning/organizational development, relationship development/loyalty programs,[34] and e-commerce. Social media can also be a good source of information on, and explanation of, industry trends, helping a business to embrace change. Trends in social media technology and usage change rapidly, making it crucial for businesses to have a set of guidelines that can apply to many social media platforms.[56]

Companies increasingly use social media monitoring tools to monitor, track, and analyze online conversations on the Web about their brand or products or about related topics of interest. This can prove useful in public relations management and advertising campaign tracking, allowing analysts to measure return on investment for their social media ad spending, audit competitors, and gauge public engagement. Tools range from free, basic applications to subscription-based, more in-depth tools.
Financial industries utilize the power of Social media as a tool for analysing the sentiment of financial markets. Uses range from the marketing of financial products and gaining insight into market sentiment to making market predictions and identifying insider trading.[57] Social media becomes effective through a process called[by whom?] "building social authority".[58] One of the foundation concepts in Social media has become[when?] that one cannot completely control one's message through Social media but rather one can simply begin to participate in the "conversation" expecting that one can achieve a significant influence in that conversation.[59] Social media "mining" is a type of data mining, a technique of analyzing data to detect patterns. Social media mining is a process of representing, analyzing, and extracting actionable patterns from data collected from people's activities on Social media. Google mines data in many ways, including using an algorithm in Gmail to analyze information in emails. This use of the information then affects the type of advertisements shown to the user when they use Gmail. Facebook has partnered with many data mining companies such as Datalogix and BlueKai to use customer information for targeted advertising.[60] This practice raises ethical questions about the extent to which a company should be able to utilize a user's information, questions often discussed under the heading of "big data".[60] Users tend to click through Terms of Use agreements when signing up on Social media platforms, and they do not know how their information will be used by companies. This leads to questions of privacy and surveillance when user data is recorded. Some Social media outlets have added capture time and geotagging, which help provide information about the context of the data as well as making their data more accurate. Social media has a range of uses in political processes and activities. Social media have been championed as allowing anyone with an Internet connection to become a content creator[61] and empowering their users.[62] The role of Social media in democratizing media participation, which proponents herald as ushering in a new era of participatory democracy, with all users able to contribute news and comments, may fall short of the ideals, given that many often follow like-minded individuals, as noted by Philip Pond and Jeff Lewis.[63] Online media audience members are largely passive consumers, while content creation is dominated by a small number of users who post comments and write new content.[64]:78 Younger generations are becoming more involved in politics due to the increase of political news posted on Social media. Political campaigns are targeting Millennials online via Social media posts in the hope that they will increase their political engagement.[65] Social media was influential in the widespread attention given to the revolutionary outbreaks in the Middle East and North Africa during 2011.[66][67][68] During the Tunisian revolution in 2011, people used Facebook to organize meetings and protests.[61] However, there is debate about the extent to which Social media facilitated this kind of political change.[69] Social media footprints of candidates have grown during the last decade and the 2016 United States presidential election provides a good example. Dounoucos et al.
noted that Twitter use by the candidates was unprecedented during that election cycle.[70] Most candidates in the United States have a Twitter account.[71] The public has also increased their reliance on Social media sites for political information.[70] In the European Union, Social media has amplified political messages.[72] One challenge is that militant groups have begun to see Social media as a major organizing and recruiting tool.[73] The Islamic State of Iraq and the Levant, also known as ISIL, ISIS, and Daesh, has used Social media to promote its cause. In 2014, #AllEyesonISIS went viral on Arabic Twitter.[74] ISIS produces an online magazine named the Islamic State Report to recruit more fighters.[75] Social media platforms have been weaponized by state-sponsored cyber groups to attack governments in the United States, European Union, and Middle East. Although phishing attacks via email are the most commonly used tactic to breach government networks, phishing attacks on Social media rose 500% in 2016.[76] Some employers examine job applicants' Social media profiles as part of the hiring assessment. This practice raises ethical questions that some consider an employer's right and others consider discrimination. Many Western European countries have already implemented laws that restrict employers' regulation of Social media in the workplace. States including Arkansas, California, Colorado, Illinois, Maryland, Michigan, Nevada, New Jersey, New Mexico, Utah, Washington, and Wisconsin have passed legislation that protects potential employees and current employees from employers that demand that they provide their usernames and/or passwords for any Social media accounts.[77] Use of Social media by young people has caused significant problems for some applicants who are active on Social media when they try to enter the job market. A survey of 17,000 young people in six countries in 2013 found that 1 in 10 people aged 16 to 34 had been rejected for a job because of online comments they made on Social media websites.[78] This is not only an issue in the workplace but also in post-secondary school admissions. There have been situations where students have been forced to give up their Social media passwords to school administrators.[79] There are inadequate laws to protect a student's Social media privacy, and organizations such as the ACLU are pushing for more privacy protection, regarding such demands as an invasion of privacy. They urge students who are pressured to give up their account information to tell the administrators to contact a parent or lawyer before taking the matter any further. Although they are students, they still have the right to keep their password-protected information private.[80] Before Social media,[81] admissions officials in the United States used SAT and other standardized test scores, extra-curricular activities, letters of recommendation, and high school report cards to determine whether to accept or deny an applicant. In the 2010s, while colleges and universities still use these traditional methods to evaluate applicants, these institutions are increasingly accessing applicants' Social media profiles to learn about their character and activities.
According to Kaplan, Inc., a corporation that provides higher-education preparation, in 2012 27% of admissions officers used Google to learn more about an applicant, with 26% checking Facebook.[82] Students whose Social media pages include offensive jokes or photos, racist or homophobic comments, photos depicting the applicant engaging in illegal drug use or drunkenness, and so on, may be screened out from admission processes. Social media has been used extensively in civil and criminal investigations.[83] It has also been used to assist in searches for missing persons.[84] Police departments often make use of official Social media accounts to engage with the public, publicize police activity, and burnish law enforcement's image;[85][86] conversely, video footage of citizen-documented police brutality and other misconduct has sometimes been posted to Social media.[86] In the United States, U.S. Immigration and Customs Enforcement identifies and tracks individuals via Social media, and has also apprehended some people via Social media-based sting operations.[87] U.S. Customs and Border Protection (CBP) and the United States Department of Homeland Security use Social media data as influencing factors during the visa process, and continue to monitor individuals after they have entered the country.[88] CBP officers have also been documented performing searches of electronics and Social media behavior at the border, searching both citizens and non-citizens without first obtaining a warrant.[88] Social media comments and images are being used in a range of court cases including employment law, child custody/child support and insurance disability claims. After an Apple employee criticized his employer on Facebook, he was fired. When the former employee sued Apple for unfair dismissal, the court, after seeing the man's Facebook posts, found in favor of Apple, as the man's Social media comments breached Apple's policies.[89] After a heterosexual couple broke up, the man posted "violent rap lyrics from a song that talked about fantasies of killing the rapper's ex-wife" and made threats against her. The court found him guilty and he was sentenced to jail.[89] In a disability claims case, a woman who fell at work claimed that she was permanently injured; the employer used her Social media posts of her travels and activities to counter her claims.[89] Courts do not always admit Social media evidence, in part because screenshots can be faked or tampered with.[90] Judges are taking emojis into account to assess statements made on Social media; in one Michigan case where a person alleged that another person had defamed them in an online comment, the judge disagreed, noting that there was an emoji after the comment which indicated that it was a joke.[90] In a 2014 case in Ontario against a police officer regarding alleged assault of a protester during the G20 summit, the court rejected the Crown's application to use a digital photo of the protest that was anonymously posted online, because there was no metadata proving when the photo was taken and it could have been digitally altered.[90] Social media marketing has increased due to the growing active user rates on Social media sites.
For example, Facebook currently has 2.2 billion users, Twitter has 330 million active users and Instagram has 800 million users.[91] One of the main uses is to interact with audiences to create awareness of a brand or service, with the main idea of creating a two-way communication system in which the audience and/or customers can interact back, for example by providing feedback.[92] Social media can be used to advertise: placing an advert on Facebook's Newsfeed, for example, can allow a vast number of people to see it, or adverts can target specific audiences based on their usage to encourage awareness of the product or brand. Users of Social media are then able to like, share and comment on the advert, becoming message senders as they can keep passing the advert's message on to their friends and onwards.[93] The use of new media puts consumers in the position of spreading opinions and sharing experiences, and has shifted power from organizations to consumers, since it allows transparency and different opinions to be heard.[94] Social media marketing has to keep up with all the different platforms. It also has to keep up with the ongoing trends set by big influencers that draw many people's attention. The type of audience a business is targeting will determine the Social media site it uses.[3] Social media personalities have been employed by marketers to promote products online. Research shows that digital endorsements seem to be successfully targeting Social media users,[95] especially younger consumers who have grown up in the digital age.[96] Celebrities with large Social media followings, such as Kylie Jenner, regularly endorse products to their followers on their Social media pages.[97] In 2013, the United Kingdom Advertising Standards Authority (ASA) began to advise celebrities and sports stars to make it clear if they had been paid to tweet about a product or service by using the hashtag #spon or #ad within tweets containing endorsements. The practice of harnessing Social media personalities to market or promote a product or service to their following is commonly referred to as influencer marketing. The Cambridge Dictionary defines an "influencer" as any person (personality, blogger, journalist, celebrity) who has the ability to affect the opinions, behaviors, or purchases of others through the use of Social media.[98] Companies such as fast food franchise Wendy's have used humor to advertise their products by poking fun at competitors such as McDonald's and Burger King.[99] Other companies such as Juul have used hashtags to promote themselves and their products.[100] On Social media, consumers are exposed to the purchasing practices of peers through messages from a peer's account, which may be peer-written. Such messages may be part of an interactive marketing strategy involving modeling, reinforcement, and social interaction mechanisms.[101] A 2011 study focusing on peer communication through Social media described how communication between peers through Social media can affect purchase intentions: a direct impact through conformity, and an indirect impact by stressing product engagement.[101] The study indicated that Social media communication between peers about a product had a positive relationship with product engagement.[101] Signals from Social media are used to assess academic publications,[102] as well as for evaluating the quality of Wikipedia articles and their sources.[103] Data from Social media can also be used for different scientific approaches.
One study examined how millions of users interact with socially shared news and showed that individuals' choices played a stronger role in limiting exposure to cross-cutting content.[104] Another study found that most health science students acquire academic materials from peers through Social media.[105] Massive amounts of data from social platforms allow scientists and machine learning researchers to extract insights and build product features.[106] Social media data can also help reveal patterns of deception in resumes.[107] In the United States, 81% of users look online for news of the weather, first and foremost, with the percentage seeking national news at 73%, 52% for sports news, and 41% for entertainment or celebrity news. According to CNN, in 2010 75% of people got their news forwarded through e-mail or Social media posts, whereas 37% of people shared a news item via Facebook or Twitter.[108] Facebook and Twitter make news a more participatory experience than before as people share news articles and comment on other people's posts. Rainie and Wellman have argued that media making has now become participatory work,[109] which changes communication systems. However, 27% of respondents worry about the accuracy of a story on a blog.[64] In a 2019 poll, Pew Research Center found that Americans are wary about the ways that Social media sites share news and certain content.[110] This wariness about accuracy is on the rise as Social media sites are increasingly exploited by aggregated news sources which stitch together multiple feeds to develop plausible correlations. Hemsley, Jacobson et al. refer to this phenomenon as "pseudoknowledge", which develops false narratives and fake news that are supported by general analysis and ideology rather than facts.[111] Social media's role as a news source is further questioned by the spikes in dubious content that surround major news events, as was observed during the 2016 United States presidential election.[112] News media and television journalism have been a key feature in the shaping of American collective memory for much of the twentieth century.[113][114] Indeed, since the United States' colonial era, news media has influenced collective memory and discourse about national development and trauma. In many ways, mainstream journalists have maintained an authoritative voice as the storytellers of the American past. Their documentary-style narratives, detailed exposés, and their positions in the present make them prime sources for public memory. Specifically, news media journalists have shaped collective memory on nearly every major national event – from the deaths of social and political figures to the progression of political hopefuls. Journalists provide elaborate descriptions of commemorative events in U.S. history and contemporary popular cultural sensations. Many Americans learn the significance of historical events and political issues through news media, as they are presented on popular news stations.[115] However, journalistic influence is growing less important, whereas social networking sites such as Facebook, YouTube and Twitter provide a constant supply of alternative news sources for users. As social networking becomes more popular among older and younger generations, sites such as Facebook and YouTube gradually undermine the traditionally authoritative voices of news media.
For example, American citizens contest media coverage of various social and political events as they see fit, inserting their voices into the narratives about America's past and present and shaping their own collective memories.[116][117] An example of this is the public explosion of the Trayvon Martin shooting in Sanford, Florida. News media coverage of the incident was minimal until Social media users made the story recognizable through their constant discussion of the case. Approximately one month after the fatal shooting of Trayvon Martin, its online coverage by everyday Americans garnered national attention from mainstream media journalists, in turn exemplifying media activism. In some ways, the spread of this tragic event through alternative news sources parallels that of Emmett Till – whose murder by lynching in 1955 became a national story after it was circulated in African-American and Communist newspapers. Social media is used to fulfill perceived social needs, but not all needs can be fulfilled by Social media.[118] For example, lonely individuals are more likely to use the Internet for emotional support than those who are not lonely.[119] Sherry Turkle explores these issues in her book Alone Together as she discusses how people confuse Social media usage with authentic communication. She posits that people tend to act differently online and are less afraid to hurt each other's feelings. Additionally, studies of who interacts on the Internet have shown that extraversion and openness have a positive relationship with Social media use, while emotional stability has a negative relationship with it.[120] Some online behaviors can cause stress and anxiety, due to the permanence of online posts, the fear of being hacked, or of universities and employers exploring Social media pages. Turkle also speculates that people are beginning to prefer texting to face-to-face communication, which can contribute to feelings of loneliness.[121] Some researchers have also found that exchanges that involved direct communication and reciprocation of messages correlated with lower feelings of loneliness. However, passively using Social media without sending or receiving messages does not make people feel less lonely unless they were lonely to begin with.[122] Checking updates on friends' activities on Social media is associated with the "fear of missing out" (FOMO), the "pervasive apprehension that others might be having rewarding experiences from which one is absent".[123] FOMO is a social anxiety[124] characterized by "a desire to stay continually connected with what others are doing".[123] It has negative influences on people's psychological health and well-being because it can contribute to negative mood and depressed feelings.[125] Concerns have been raised[by whom?]
about online "stalking" or "creeping" of people on Social media, which means looking at the person's "timeline, status updates, tweets, and online bios" to find information about them and their activities.[126] While Social media creeping is common, it is considered poor form to admit to a new acquaintance or new date that you have looked through his or her Social media posts, particularly older posts, as this indicates that you were going through their old history.[126] A sub-category of creeping is creeping on ex-partners' Social media posts after a breakup to investigate whether there is a new partner or new dating; this can lead to preoccupation with the ex, rumination and negative feelings, all of which postpone recovery and increase feelings of loss.[127] Catfishing has become more prevalent since the advent of Social media. Relationships formed with catfish can lead to actions such as supporting them with money, and catfish will typically make excuses as to why they cannot meet up or be viewed on camera.[128] According to research from UCLA, teenage brains' reward circuits were more active when teenagers' photos were liked by more peers. This has both positive and negative features. Teenagers and young adults befriend people online whom they do not know well. This opens the possibility of a child being influenced by people who engage in risk-taking behavior. When children have several hundred online connections there is no way for parents to know who they are.[129] The more time people spend on Facebook, the less satisfied they feel about their life.[130] Self-presentational theory explains that people will consciously manage their self-image or identity-related information in social contexts. When people are not accepted or are criticized online they feel emotional pain.[131] This may lead to some form of online retaliation such as online bullying.[132] Trudy Hui Hui Chua and Leanne Chang's article, "Follow Me and Like My Beautiful Selfies: Singapore Teenage Girls' Engagement in Self-Presentation and Peer Comparison on Social media"[133] states that teenage girls manipulate their self-presentation on Social media to achieve a sense of beauty that is projected by their peers. These authors also discovered that teenage girls compare themselves to their peers on Social media and present themselves in certain ways in an effort to earn regard and acceptance, which can actually lead to problems with self-confidence and self-satisfaction.[133] Users also tend to segment their audiences based on the image they want to present; pseudonymity and the use of multiple accounts across the same platform remain popular ways to negotiate platform expectations and segmented audiences.

OLED

An organic light-emitting diode (OLED or Organic LED), also known as an organic EL (organic electroluminescent) diode,[1][2] is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound that emits light in response to an electric current. This organic layer is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, portable systems such as smartphones, handheld game consoles and PDAs. A major area of research is the development of white OLED devices for use in solid-state lighting applications.[3][4][5] There are two main families of OLED: those based on small molecules and those employing polymers.
Adding mobile ions to an OLED creates a light-emitting electrochemical cell (LEC), which has a slightly different mode of operation. An OLED display can be driven with a passive-matrix (PMOLED) or active-matrix (AMOLED) control scheme. In the PMOLED scheme, each row (line) of the display is controlled sequentially, one by one,[6] whereas AMOLED control uses a thin-film transistor backplane to directly access and switch each individual pixel on or off, allowing for higher resolution and larger display sizes. An OLED display works without a backlight because it emits visible light. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions (such as a dark room), an OLED screen can achieve a higher contrast ratio than an LCD, regardless of whether the LCD uses cold cathode fluorescent lamps or an LED backlight. OLED displays are made in the same way as LCDs up to a point: after TFT formation (for active-matrix displays), addressable-grid formation (for passive-matrix displays) or ITO segment formation (for segment displays), the display is coated with hole injection, transport and blocking layers, and then with electroluminescent material after the first two layers, after which ITO or metal may be applied again as a cathode; finally, the entire stack of materials is encapsulated. The TFT layer, addressable grid or ITO segments serve as or are connected to the anode, which may be made of ITO or metal.[7][8] OLEDs can be made flexible and transparent, with transparent displays being used in smartphones with optical fingerprint scanners and flexible displays being used in foldable smartphones. André Bernanose and co-workers at the Nancy-Université in France made the first observations of electroluminescence in organic materials in the early 1950s. They applied high alternating voltages in air to materials such as acridine orange, either deposited on or dissolved in cellulose or cellophane thin films. The proposed mechanism was either direct excitation of the dye molecules or excitation of electrons.[9][10][11][12] In 1960, Martin Pope and some of his co-workers at New York University developed ohmic dark-injecting electrode contacts to organic crystals.[13][14][15] They further described the necessary energetic requirements (work functions) for hole- and electron-injecting electrode contacts. These contacts are the basis of charge injection in all modern OLED devices. Pope's group also first observed direct current (DC) electroluminescence under vacuum on a single pure crystal of anthracene and on anthracene crystals doped with tetracene in 1963,[16] using a small-area silver electrode at 400 volts. The proposed mechanism was field-accelerated electron excitation of molecular fluorescence. Pope's group reported in 1965[17] that in the absence of an external electric field, the electroluminescence in anthracene crystals is caused by the recombination of a thermalized electron and hole, and that the conducting level of anthracene is higher in energy than the exciton energy level. Also in 1965, Wolfgang Helfrich and W. G. Schneider of the National Research Council in Canada produced double-injection recombination electroluminescence for the first time in an anthracene single crystal using hole- and electron-injecting electrodes,[18] the forerunner of modern double-injection devices.
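The passive-matrix scanning described above has a direct arithmetic consequence: with N rows driven one at a time, each pixel is lit for only 1/N of each frame, so its instantaneous luminance must be roughly N times the desired average. A toy calculation (an idealized model that ignores driver losses and material limits):

def required_peak_luminance(target_cd_m2, n_rows):
    # Each row is on for 1/n_rows of the frame in a passive matrix,
    # so peak luminance must scale up by n_rows to hit the average.
    duty_cycle = 1.0 / n_rows
    return target_cd_m2 / duty_cycle

print(required_peak_luminance(100, 64))  # 6400.0 cd/m2 for a 64-row PMOLED
print(required_peak_luminance(100, 1))   # 100.0 cd/m2 if every pixel is driven continuously

This is one reason PMOLED panels stay small and low-resolution, while AMOLED backplanes, which hold every pixel on for the whole frame, scale to large, high-resolution displays.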
In the same year, Dow Chemical researchers patented a method of preparing electroluminescent cells using high-voltage (500–1500 V) AC-driven (100–3000 Hz) electrically insulated one-millimetre-thin layers of a melted phosphor consisting of ground anthracene powder, tetracene, and graphite powder.[19] Their proposed mechanism involved electronic excitation at the contacts between the graphite particles and the anthracene molecules. Roger Partridge made the first observation of electroluminescence from polymer films at the National Physical Laboratory in the United Kingdom. The device consisted of a film of poly(N-vinylcarbazole) up to 2.2 micrometers thick located between two charge-injecting electrodes. The results of the project were patented in 1975[20] and published in 1983.[21][22][23][24] Chemists Ching Wan Tang and Steven Van Slyke at Eastman Kodak built the first practical OLED device in 1987.[25] This device used a two-layer structure with separate hole-transporting and electron-transporting layers such that recombination and light emission occurred in the middle of the organic layer; this resulted in a reduction in operating voltage and improvements in efficiency. Research into polymer electroluminescence culminated in 1990, with J. H. Burroughes et al. at the Cavendish Laboratory at Cambridge University, UK, reporting a high-efficiency green light-emitting polymer-based device using 100 nm thick films of poly(p-phenylene vinylene).[26] Moving from molecular to macromolecular materials solved the problems previously encountered with the long-term stability of the organic films and enabled high-quality films to be easily made.[27] Subsequent research developed multilayer polymers, and the new field of plastic electronics and OLED research and device production grew rapidly.[28] In 1999, Kodak and Sanyo entered into a partnership to jointly research, develop, and produce OLED displays. They announced the world's first 2.4-inch active-matrix, full-color OLED display in September of the same year.[29] In September 2002, they presented a prototype 15-inch HDTV-format display based on white OLED with color filters at CEATEC Japan.[30] Manufacturing of small-molecule OLEDs was started in 1997 by Pioneer Corporation, followed by TDK in 2001 and Samsung-NEC Mobile Display (SNMD), which in 2002 became Samsung Display, later one of the world's largest OLED display manufacturers.[31] The Sony XEL-1, released in 2007, was the first OLED television.[32] Universal Display Corporation, one of the OLED materials companies, holds a number of patents concerning the commercialization of OLEDs that are used by major OLED manufacturers around the world.[33][34] On December 5, 2017, JOLED, the successor of Sony and Panasonic's printable OLED business units, began the world's first commercial shipment of inkjet-printed OLED panels.[35][36] Dual-panel or dual-layer LCDs may be used instead of OLEDs, since OLEDs are more difficult and expensive to make than LCDs. A typical OLED is composed of a layer of organic materials situated between two electrodes, the anode and cathode, all deposited on a substrate. The organic molecules are electrically conductive as a result of delocalization of pi electrons caused by conjugation over part or all of the molecule. These materials have conductivity levels ranging from insulators to conductors, and are therefore considered organic semiconductors.
The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of organic semiconductors are analogous to the valence and conduction bands of inorganic semiconductors.[37] Originally, the most basic polymer OLEDs consisted of a single organic layer. One example was the first light-emitting device synthesised by J. H. Burroughes et al., which involved a single layer of poly(p-phenylene vinylene). However, multilayer OLEDs can be fabricated with two or more layers in order to improve device efficiency. As well as for their conductive properties, different materials may be chosen to aid charge injection at electrodes by providing a more gradual electronic profile,[38] or to block a charge from reaching the opposite electrode and being wasted.[39] Many modern OLEDs incorporate a simple bilayer structure, consisting of a conductive layer and an emissive layer. More recent[when?] developments in OLED architecture improve quantum efficiency (up to 19%) by using a graded heterojunction.[40] In the graded heterojunction architecture, the composition of hole- and electron-transport materials varies continuously within the emissive layer with a dopant emitter. The graded heterojunction architecture combines the benefits of both conventional architectures by improving charge injection while simultaneously balancing charge transport within the emissive region.[41] During operation, a voltage is applied across the OLED such that the anode is positive with respect to the cathode. Anodes are picked based upon the quality of their optical transparency, electrical conductivity, and chemical stability.[42] A current of electrons flows through the device from cathode to anode, as electrons are injected into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This latter process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring the electrons and the holes towards each other and they recombine, forming an exciton, a bound state of the electron and hole. This happens closer to the electron-transport side of the emissive layer, because in organic semiconductors holes are generally more mobile than electrons. The decay of this excited state results in a relaxation of the energy levels of the electron, accompanied by emission of radiation whose frequency is in the visible region. The frequency of this radiation depends on the band gap of the material, in this case the difference in energy between the HOMO and LUMO. As electrons and holes are fermions with half-integer spin, an exciton may either be in a singlet state or a triplet state depending on how the spins of the electron and hole have been combined. Statistically, three triplet excitons will be formed for each singlet exciton. Decay from triplet states (phosphorescence) is spin-forbidden, increasing the timescale of the transition and limiting the internal efficiency of fluorescent devices. Phosphorescent organic light-emitting diodes make use of spin–orbit interactions to facilitate intersystem crossing between singlet and triplet states, thus obtaining emission from both singlet and triplet states and improving the internal efficiency. Indium tin oxide (ITO) is commonly used as the anode material. It is transparent to visible light and has a high work function which promotes injection of holes into the HOMO level of the organic layer.
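The 3:1 triplet-to-singlet statistics mentioned above set a hard ceiling on fluorescent devices, which a short calculation makes explicit (a textbook simplification that ignores other loss channels):

# Spin statistics: three triplets form for every singlet exciton.
singlet_fraction = 1 / 4
triplet_fraction = 3 / 4

# A purely fluorescent emitter only harvests singlets; a phosphorescent
# emitter can in principle harvest both populations.
max_iqe_fluorescent = singlet_fraction
max_iqe_phosphorescent = singlet_fraction + triplet_fraction

print(f"{max_iqe_fluorescent:.0%}")     # 25%
print(f"{max_iqe_phosphorescent:.0%}")  # 100%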
A second conductive (injection) layer is typically added, which may consist of PEDOT:PSS,[43] as the HOMO level of this material generally lies between the work function of ITO and the HOMO of other commonly used polymers, reducing the energy barriers for hole injection. Metals such as barium and calcium are often used for the cathode as they have low work functions which promote injection of electrons into the LUMO of the organic layer.[44] Such metals are reactive, so they require a capping layer of aluminium to avoid degradation. Two secondary benefits of the aluminium capping layer are robust electrical contacts and the back reflection of emitted light out through the transparent ITO layer. Experimental research has shown that the properties of the anode, specifically the anode/hole transport layer (HTL) interface topography, play a major role in the efficiency, performance, and lifetime of organic light-emitting diodes. Imperfections in the surface of the anode decrease anode-organic film interface adhesion, increase electrical resistance, and allow for more frequent formation of non-emissive dark spots in the OLED material, adversely affecting lifetime. Mechanisms to decrease anode roughness for ITO/glass substrates include the use of thin films and self-assembled monolayers. Also, alternative substrates and anode materials are being considered to increase OLED performance and lifetime. Possible examples include single-crystal sapphire substrates treated with gold (Au) film anodes, yielding lower work functions, operating voltages and electrical resistance values, and increasing the lifetime of OLEDs.[45] Single-carrier devices are typically used to study the kinetics and charge transport mechanisms of an organic material and can be useful when trying to study energy transfer processes. As the current through the device is composed of only one type of charge carrier, either electrons or holes, recombination does not occur and no light is emitted. For example, electron-only devices can be obtained by replacing ITO with a lower-work-function metal, which increases the energy barrier for hole injection. Similarly, hole-only devices can be made by using a cathode made solely of aluminium, resulting in an energy barrier too large for efficient electron injection.[46][47][48] Balanced charge injection and transfer are required to obtain high internal efficiency, pure emission from the luminance layer without contaminated emission from the charge-transporting layers, and high stability. A common way to balance charge is to optimize the thickness of the charge-transporting layers, but this is hard to control. Another way is to use an exciplex. An exciplex formed between hole-transporting (p-type) and electron-transporting (n-type) side chains localizes electron-hole pairs; energy is then transferred to the luminophore, providing high efficiency. For example, grafting oxadiazole and carbazole side units onto a red diketopyrrolopyrrole-doped copolymer main chain showed improved external quantum efficiency and color purity in an otherwise non-optimized OLED.[49] Efficient OLEDs using small molecules were first developed by Ching W. Tang et al.[25] at Eastman Kodak. The term OLED traditionally refers specifically to this type of device, though the term SM-OLED is also in use.[37] Molecules commonly used in OLEDs include organometallic chelates (for example Alq3, used in the organic light-emitting device reported by Tang et al.), fluorescent and phosphorescent dyes and conjugated dendrimers.
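To see why an interlayer such as PEDOT:PSS helps hole injection, compare a single large energy step with two smaller ones. The numbers below are rough illustrative values in eV, assumed for the sketch rather than taken from the cited works:

ITO_WORK_FUNCTION = 4.7  # assumed, approximate
PEDOT_PSS_LEVEL = 5.1    # assumed; lies between ITO and the polymer HOMO
POLYMER_HOMO = 5.4       # assumed, approximate

# Without the interlayer, holes face one large barrier; with it,
# they climb two smaller steps.
direct = POLYMER_HOMO - ITO_WORK_FUNCTION
stepped = [PEDOT_PSS_LEVEL - ITO_WORK_FUNCTION, POLYMER_HOMO - PEDOT_PSS_LEVEL]
print(round(direct, 2), [round(s, 2) for s in stepped])  # 0.7 [0.4, 0.3]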
A number of materials are used for their charge transport properties; for example, triphenylamine and its derivatives are commonly used as materials for hole transport layers.[50] Fluorescent dyes can be chosen to obtain light emission at different wavelengths, and compounds such as perylene, rubrene and quinacridone derivatives are often used.[51] Alq3 has been used as a green emitter, an electron transport material and a host for yellow and red emitting dyes. The production of small-molecule devices and displays usually involves thermal evaporation in a vacuum. This makes the production process more expensive, and of more limited use for large-area devices, than other processing techniques. However, contrary to polymer-based devices, the vacuum deposition process enables the formation of well-controlled, homogeneous films, and the construction of very complex multi-layer structures. This high flexibility in layer design, enabling distinct charge transport and charge blocking layers to be formed, is the main reason for the high efficiencies of small-molecule OLEDs. Coherent emission from a laser-dye-doped tandem SM-OLED device, excited in the pulsed regime, has been demonstrated.[52] The emission is nearly diffraction limited with a spectral width similar to that of broadband dye lasers.[53] Researchers have reported luminescence from a single polymer molecule, representing the smallest possible organic light-emitting diode (OLED) device.[54] This may allow scientists to optimize substances to produce more powerful light emissions. Finally, this work is a first step towards making molecule-sized components that combine electronic and optical properties. Similar components could form the basis of a molecular computer.[55] Polymer light-emitting diodes (PLED, P-OLED), also known as light-emitting polymers (LEP), involve an electroluminescent conductive polymer that emits light when connected to an external voltage. They are used as a thin film for full-spectrum colour displays. Polymer OLEDs are quite efficient and require a relatively small amount of power for the amount of light produced. Vacuum deposition is not a suitable method for forming thin films of polymers. However, polymers can be processed in solution, and spin coating is a common method of depositing thin polymer films. This method is more suited to forming large-area films than thermal evaporation. No vacuum is required, and the emissive materials can also be applied on the substrate by a technique derived from commercial inkjet printing.[56][57] However, as the application of subsequent layers tends to dissolve those already present, formation of multilayer structures is difficult with these methods. The metal cathode may still need to be deposited by thermal evaporation in vacuum. An alternative method to vacuum deposition is to deposit a Langmuir-Blodgett film. Typical polymers used in PLED displays include derivatives of poly(p-phenylene vinylene) and polyfluorene.
Substitution of side chains onto the polymer backbone may determine the colour of emitted light[58] or the stability and solubility of the polymer for performance and ease of processing.[59] While unsubstituted poly(p-phenylene vinylene) (PPV) is typically insoluble, a number of PPVs and related poly(naphthalene vinylene)s (PNVs) that are soluble in organic solvents or water have been prepared via ring-opening metathesis polymerization.[60][61][62] These water-soluble polymers, or conjugated polyelectrolytes (CPEs), can also be used as hole injection layers, alone or in combination with nanoparticles such as graphene.[63] Phosphorescent organic light-emitting diodes use the principle of electrophosphorescence to convert electrical energy in an OLED into light in a highly efficient manner,[65][66] with the internal quantum efficiencies of such devices approaching 100%.[67] Typically, a polymer such as poly(N-vinylcarbazole) is used as a host material to which an organometallic complex is added as a dopant. Iridium complexes[66] such as Ir(mppy)3[64] are currently[when?] the focus of research, although complexes based on other heavy metals such as platinum[65] have also been used. The heavy metal atom at the centre of these complexes exhibits strong spin-orbit coupling, facilitating intersystem crossing between singlet and triplet states. By using these phosphorescent materials, both singlet and triplet excitons are able to decay radiatively, hence improving the internal quantum efficiency of the device compared to a standard OLED, where only the singlet states contribute to the emission of light. Applications of OLEDs in solid-state lighting require the achievement of high brightness with good CIE coordinates (for white emission). The use of macromolecular species like polyhedral oligomeric silsesquioxanes (POSS) in conjunction with phosphorescent species such as Ir complexes in printed OLEDs has yielded brightnesses as high as 10,000 cd/m2.[68] All OLED displays (passive and active matrix) use a driver IC, often mounted using chip-on-glass (COG) bonding with an anisotropic conductive film.[73] The most commonly used patterning method for organic light-emitting displays is shadow masking during film deposition,[74] also called the "RGB side-by-side" method or "RGB pixelation" method. Metal sheets with multiple apertures, made of a low-thermal-expansion material such as nickel alloy, are placed between the heated evaporation source and the substrate, so that the organic or inorganic material from the evaporation source is deposited only at the desired locations on the substrate. Almost all small OLED displays for smartphones have been manufactured using this method. Fine metal masks (FMMs) made by photochemical machining, reminiscent of old CRT shadow masks, are used in this process. The dot density of the mask determines the pixel density of the finished display.[75] Fine Hybrid Masks (FHMs) are lighter than FMMs, reducing bending caused by the mask's own weight, and are made using an electroforming process.[76][77] This method requires heating the electroluminescent materials to 300 °C in a high vacuum of 10^-5 Pa using an electron beam. An oxygen meter ensures that no oxygen enters the chamber, as it could damage (through oxidation) the electroluminescent material, which is in powder form. The mask is aligned with the mother substrate before every use, and it is placed just below the substrate.
The substrate and mask assembly are placed at the top of the deposition chamber.[78] Afterwards, the electrode layer is deposited by subjecting silver and aluminium powder to 1000 °C, using an electron beam.[79] Shadow masks allow for high pixel densities of up to 2,250 PPI. High pixel densities are necessary for virtual reality headsets.[80] Although the shadow-mask patterning method is a mature technology used since the first OLED manufacturing, it causes many issues, such as dark spot formation due to mask-substrate contact, or misalignment of the pattern due to deformation of the shadow mask. Such defect formation can be regarded as trivial when the display size is small; however, it causes serious issues when a large display is manufactured, bringing significant production yield loss. To circumvent such issues, white-emission devices with 4-sub-pixel color filters (white, red, green and blue) have been used for large televisions. In spite of the light absorption by the color filter, state-of-the-art OLED televisions can reproduce color very well, such as 100% NTSC, while consuming little power. This is done by using an emission spectrum with high human-eye sensitivity, special color filters with a low spectrum overlap, and performance tuning that takes color statistics into consideration.[81] This approach is also called the "color-by-white" method. There are other types of emerging patterning technologies that aim to increase the manufacturability of OLEDs. Patternable organic light-emitting devices use a light- or heat-activated electroactive layer. A latent material (PEDOT-TMA) is included in this layer that, upon activation, becomes highly efficient as a hole injection layer. Using this process, light-emitting devices with arbitrary patterns can be prepared.[82] Colour patterning can be accomplished by means of a laser, as in radiation-induced sublimation transfer (RIST).[83] Organic vapour jet printing (OVJP) uses an inert carrier gas, such as argon or nitrogen, to transport evaporated organic molecules (as in organic vapour phase deposition). The gas is expelled through a micrometre-sized nozzle or nozzle array close to the substrate as it is being translated. This allows printing arbitrary multilayer patterns without the use of solvents. Like inkjet material deposition, inkjet etching (IJE) deposits precise amounts of solvent onto a substrate designed to selectively dissolve the substrate material and induce a structure or pattern. Inkjet etching of polymer layers in OLEDs can be used to increase the overall out-coupling efficiency. In OLEDs, light produced from the emissive layers of the OLED is partially transmitted out of the device and partially trapped inside the device by total internal reflection (TIR). This trapped light is wave-guided along the interior of the device until it reaches an edge, where it is dissipated by either absorption or emission. Inkjet etching can be used to selectively alter the polymeric layers of OLED structures to decrease overall TIR and increase the out-coupling efficiency of the OLED. Compared to a non-etched polymer layer, the structured polymer layer in the OLED structure from the IJE process helps to decrease the TIR of the OLED device. IJE solvents are commonly organic instead of water-based due to their non-acidic nature and ability to effectively dissolve materials at temperatures under the boiling point of water.[84] Transfer-printing is an emerging technology for assembling large numbers of parallel OLED and AMOLED devices efficiently.
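Pixel density figures like the 2,250 PPI quoted above follow from simple geometry: PPI is the resolution along the diagonal, in pixels, divided by the diagonal size in inches. A quick calculator with hypothetical panel numbers:

import math

def ppi(width_px, height_px, diagonal_in):
    # Pixels along the diagonal divided by the diagonal length.
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1080, 5.5)))  # ~401 PPI, a typical smartphone panel
print(round(ppi(3840, 2160, 2.0)))  # ~2203 PPI, a small VR-class panel

The second example shows why VR headsets, which put small panels close to the eye, push toward the upper limit of what shadow-mask patterning can deliver.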
It takes advantage of standard metal deposition, photolithography, and etching to create alignment marks, commonly on glass or other device substrates. Thin polymer adhesive layers are applied to enhance resistance to particles and surface defects. Microscale ICs are transfer-printed onto the adhesive surface and then baked to fully cure the adhesive layers. An additional photosensitive polymer layer is applied to the substrate to account for the topography caused by the printed ICs, reintroducing a flat surface. Photolithography and etching remove some polymer layers to uncover conductive pads on the ICs. Afterwards, the anode layer is applied to the device backplane to form the bottom electrode. OLED layers are applied to the anode layer with conventional vapor deposition, and covered with a conductive metal electrode layer. As of 2011[update], transfer-printing was capable of printing onto target substrates up to 500 mm × 400 mm. This size limit needs to expand for transfer-printing to become a common process for the fabrication of large OLED/AMOLED displays.[85] Experimental OLED displays using conventional photolithography techniques instead of FMMs have been demonstrated, allowing for large substrate sizes (as this eliminates the need for a mask as large as the substrate) and good yield control.[86] For a high-resolution display like a TV, a TFT backplane is necessary to drive the pixels correctly. As of 2019, low-temperature polycrystalline silicon (LTPS) thin-film transistor (TFT) backplanes are widely used for commercial AMOLED displays. LTPS TFTs show performance variation across a display, so various compensation circuits have been reported.[87] Due to the size limitation of the excimer laser used for LTPS, AMOLED size was limited. To cope with the hurdle related to panel size, amorphous-silicon/microcrystalline-silicon backplanes have been reported with large display prototype demonstrations.[88] An IGZO backplane can also be used. The manufacturing process of OLEDs lends them several advantages over flat-panel displays made with LCD technology. The biggest technical problem for OLEDs is the limited lifetime of the organic materials. One 2008 technical report on an OLED TV panel found that after 1,000 hours, the blue luminance degraded by 12%, the red by 7% and the green by 8%.[95] In particular, blue OLEDs historically have had a lifetime of around 14,000 hours to half original brightness (five years at eight hours per day) when used for flat-panel displays. This is lower than the typical lifetime of LCD, LED or PDP technology; each is currently[when?] rated for about 25,000–40,000 hours to half brightness, depending on manufacturer and model. One major challenge for OLED displays is the formation of dark spots due to the ingress of oxygen and moisture, which degrades the organic material over time whether or not the display is powered.[96][97][98] In 2016, LG Electronics reported an expected lifetime of 100,000 hours, up from 36,000 hours in 2013.[99] However, Rtings tested several OLED TVs from 2018 to 2019 and found that their expected lifetime is just 9,064 hours before burn-in becomes noticeable.[100] A US Department of Energy paper shows that the expected lifespans of OLED lighting products go down with increasing brightness, with an expected lifespan of 40,000 hours at 25% brightness, or 10,000 hours at 100% brightness.[101] Degradation occurs because of the accumulation of nonradiative recombination centers and luminescence quenchers in the emissive zone.
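The degradation figures above can be turned into a back-of-envelope half-life estimate if one assumes simple exponential luminance decay, L(t) = L0 * exp(-t/tau); real OLED decay is often closer to a stretched exponential, so treat this only as a sketch:

import math

def half_life_hours(retained_fraction, elapsed_hours):
    # Fit tau from one data point, then convert to a half-life.
    tau = -elapsed_hours / math.log(retained_fraction)
    return tau * math.log(2)

# Blue retained 88% of its luminance after 1,000 hours (the 12% loss above):
print(round(half_life_hours(0.88, 1000)))  # ~5422 hours to half brightness

Under the same assumption, the red (7% loss) and green (8% loss) channels extrapolate to considerably longer half-lives, matching the report's point that blue ages fastest.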
The chemical breakdown in the semiconductors is said to occur in four steps. However, some manufacturers' displays aim to increase the lifespan of OLED displays, pushing their expected life past that of LCD displays by improving light outcoupling, thus achieving the same brightness at a lower drive current.[103][104] In 2007, experimental OLEDs were created which could sustain 400 cd/m2 of luminance for over 198,000 hours for green OLEDs and 62,000 hours for blue OLEDs.[105] In 2012, OLED lifetime to half of the initial brightness was improved to 900,000 hours for red, 1,450,000 hours for yellow and 400,000 hours for green, at an initial luminance of 1,000 cd/m2.[106] Proper encapsulation is critical for prolonging an OLED display's lifetime, as the OLED's light-emitting electroluminescent materials are sensitive to oxygen and moisture. When exposed to moisture or oxygen, the electroluminescent materials in OLEDs degrade as they oxidize, generating black spots and shrinking the area that emits light, reducing light output. This reduction can occur on a pixel-by-pixel basis. It can also lead to delamination of the electrode layer, eventually leading to complete panel failure. Degradation occurs three times faster when exposed to moisture than when exposed to oxygen. Encapsulation can be performed by applying an epoxy adhesive with desiccant,[107] by laminating a glass sheet with epoxy glue and desiccant[108] followed by vacuum degassing, or by using Thin-Film Encapsulation (TFE), a multi-layer coating of alternating organic and inorganic layers. The organic layers are applied using inkjet printing, and the inorganic layers are applied using Atomic Layer Deposition (ALD). The encapsulation process is carried out under a nitrogen environment using UV-curable LOCA glue, while the electroluminescent and electrode material deposition processes are carried out under high vacuum. The encapsulation and material deposition processes are carried out by a single machine, after the thin-film transistors have been applied. The transistors are applied in a process that is the same as for LCDs. The electroluminescent materials can also be applied using inkjet printing.[109][110][111][79][112][107][113] The OLED material used to produce blue light degrades much more rapidly than the materials used to produce other colors; in other words, blue light output will decrease relative to the other colors of light. This variation in the differential color output will change the color balance of the display, and is much more noticeable than a uniform decrease in overall luminance.[114] This can be avoided partially by adjusting the color balance, but this may require advanced control circuits and input from a knowledgeable user. More commonly, though, manufacturers optimize the size of the R, G and B subpixels to reduce the current density through each subpixel in order to equalize lifetime at full luminance. For example, a blue subpixel may be 100% larger than the green subpixel, while the red subpixel may be 10% larger than the green. Improvements to the efficiency and lifetime of blue OLEDs are vital to the success of OLEDs as replacements for LCD technology.
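The subpixel-sizing strategy above is, at bottom, current-density arithmetic: for a fixed drive current, current density scales inversely with subpixel area. A minimal sketch with normalized units (the 1 mA figure is arbitrary):

def current_density(current_ma, relative_area):
    # The same current spread over a larger area gives a lower density.
    return current_ma / relative_area

green = current_density(1.0, 1.0)  # reference subpixel
blue = current_density(1.0, 2.0)   # blue 100% larger, as in the example above
red = current_density(1.0, 1.1)    # red 10% larger
print(green, blue, round(red, 3))  # 1.0 0.5 0.909

Halving the blue subpixel's current density slows its degradation, pushing its effective lifetime toward that of the red and green subpixels.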
Considerable research has been invested in developing blue OLEDs with high external quantum efficiency, as well as a deeper blue color.[115][116][117] External quantum efficiency values of 20% and 19% have been reported for red (625 nm) and green (530 nm) diodes, respectively.[118][119] However, blue diodes (430 nm) have only been able to achieve maximum external quantum efficiencies in the range of 4% to 6%.[120] Since 2012, research has focused on organic materials exhibiting thermally activated delayed fluorescence (TADF), discovered at Kyushu University OPERA and UC Santa Barbara CPOS. TADF would allow stable and high-efficiency solution-processable (meaning that the organic materials are layered in solutions, producing thinner layers) blue emitters, with internal quantum efficiencies reaching 100%.[121] Blue TADF emitters were expected to reach the market by 2020[122][123] and would be used for WOLED displays with phosphorescent color filters, as well as blue OLED displays with ink-printed QD color filters. Water can instantly damage the organic materials of the displays. Therefore, improved sealing processes are important for practical manufacturing. Water damage may especially limit the longevity of more flexible displays.[124] As an emissive display technology, OLEDs rely completely upon converting electricity to light, unlike most LCDs, which are to some extent reflective. E-paper leads the way in efficiency with ~33% ambient light reflectivity, enabling the display to be used without any internal light source. The metallic cathode in an OLED acts as a mirror, with reflectance approaching 80%, leading to poor readability in bright ambient light such as outdoors. However, with the proper application of a circular polarizer and antireflective coatings, the diffuse reflectance can be reduced to less than 0.1%. With 10,000 fc incident illumination (a typical test condition for simulating outdoor illumination), that yields an approximate photopic contrast of 5:1. Advances in OLED technologies, however, enable OLEDs to actually outperform LCDs in bright sunlight. The AMOLED display in the Galaxy S5, for example, was found to outperform all LCD displays on the market in terms of brightness and reflectance.[125] While an OLED will consume around 40% of the power of an LCD when displaying an image that is primarily black, for the majority of images it will consume 60–80% of the power of an LCD. However, an OLED can use more than 300% of the power of an LCD to display an image with a white background, such as a document or web site.[126] This can lead to reduced battery life in mobile devices when white backgrounds are used. Almost all OLED manufacturers rely on material deposition equipment that is only made by a handful of companies,[127] the most notable one being Canon Tokki, a unit of Canon Inc.
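The content-dependent power figures above can be folded into a rough model of OLED power relative to an LCD baseline; the linear interpolation between the quoted endpoints is an assumption for illustration, not a measured curve:

def oled_power_vs_lcd(white_fraction):
    # Interpolate between ~0.4x (mostly black) and ~3.0x (all white),
    # per the figures quoted above.
    return 0.4 + (3.0 - 0.4) * white_fraction

for frac in (0.0, 0.25, 1.0):
    print(f"{frac:.0%} white -> {oled_power_vs_lcd(frac):.2f}x LCD power")
# 0% white -> 0.40x, 25% white -> 1.05x, 100% white -> 3.00x

This is why dark themes meaningfully extend battery life on OLED phones, while white-background documents can erase OLED's power advantage entirely.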
Canon Tokki is reported to have a near-monopoly on the giant OLED-manufacturing vacuum machines, notable for their 100-metre (330 ft) size.[128] Apple has relied solely on Canon Tokki in its bid to introduce its own OLED displays for the iPhones released in 2017.[129] The electroluminescent materials needed for OLEDs are also made by a handful of companies, among them Merck, Universal Display Corporation and LG Chem.[130] The machines that apply these materials can operate continuously for 5–6 days, and can process a mother substrate in 5 minutes.[131] OLED technology is used in commercial applications such as displays for mobile phones and portable digital media players, car radios and digital cameras, among others, as well as lighting.[132] Such portable display applications favor the high light output of OLEDs for readability in sunlight and their low power drain. Portable displays are also used intermittently, so the lower lifespan of organic displays is less of an issue. Prototypes have been made of flexible and rollable displays which use OLEDs' unique characteristics. Applications in flexible signs and lighting are also being developed.[133] OLED lighting offers several advantages over LED lighting, such as higher-quality illumination, a more diffuse light source, and a wider range of panel shapes.[132] Philips Lighting has made OLED lighting samples under the brand name "Lumiblade" available online,[134] and Novaled AG, based in Dresden, Germany, introduced a line of OLED desk lamps called "Victory" in September 2011.[135] Nokia introduced OLED mobile phones including the N85 and the N86 8MP, both of which feature an AMOLED display. OLEDs have also been used in most Motorola and Samsung color cell phones, as well as some HTC, LG and Sony Ericsson models.[136] OLED technology can also be found in digital media players such as the Creative ZEN V, the iriver clix, the Zune HD and the Sony Walkman X Series. The Google/HTC Nexus One smartphone includes an AMOLED screen, as do HTC's own Desire and Legend phones. However, due to supply shortages of the Samsung-produced displays, certain HTC models will use Sony's SLCD displays in the future,[137] while the Google/Samsung Nexus S smartphone will use "Super Clear LCD" instead in some countries.[138] OLED displays were used in watches made by Fossil (JR-9465) and Diesel (DZ-7086). Other manufacturers of OLED panels include Anwell Technologies Limited (Hong Kong),[139] AU Optronics (Taiwan),[140] Chimei Innolux Corporation (Taiwan),[141] LG (Korea),[142] and others.[143] In 2009, Shearwater Research introduced the Predator as the first color OLED diving computer available with a user-replaceable battery.[144][145] BlackBerry Limited, the maker of BlackBerry smartphones, uses OLED displays in their BlackBerry 10 devices. DuPont stated in a press release in May 2010 that it could produce a 50-inch OLED TV in two minutes with a new printing technology. If this can be scaled up in terms of manufacturing, then the total cost of OLED TVs would be greatly reduced. DuPont also states that OLED TVs made with this less expensive technology can last up to 15 years if left on for a normal eight-hour day.[146][147] The use of OLEDs may be subject to patents held by Universal Display Corporation, Eastman Kodak, DuPont, General Electric, Royal Philips Electronics, numerous universities and others.[148] There are by now[when?]
thousands of patents associated with OLEDs, both from larger corporations and smaller technology companies.[37] Flexible OLED displays have been used by manufacturers to create curved displays such as the Galaxy S7 Edge, but these were not devices that could be flexed by the user.[149] Samsung demonstrated a roll-out display in 2016.[150] On October 31, 2018, Royole, a Chinese electronics company, unveiled the world's first foldable screen phone featuring a flexible OLED display.[151] On February 20, 2019, Samsung announced the Samsung Galaxy Fold with a foldable OLED display from Samsung Display, its majority-owned subsidiary.[152] At MWC 2019 on February 25, 2019, Huawei announced the Huawei Mate X featuring a foldable OLED display from BOE.[153][154] The 2010s also saw the wide adoption of TGP (Tracking Gate-line in Pixel), which moves the driving circuitry from the borders of the display to in between the display's pixels, allowing for narrow bezels.[155] Textiles incorporating OLEDs are an innovation in the fashion world and offer a way to integrate lighting into otherwise inert garments. The hope is to combine the comfort and low cost of textiles with the illumination and low energy consumption of OLEDs. Although this scenario of illuminated clothing is highly plausible, several challenges remain: the lifetime of the OLED, the rigidity of flexible foil substrates, and the lack of research into making more fabric-like photonic textiles.[156] The Japanese manufacturer Pioneer Electronic Corporation produced the first car stereos with a monochrome OLED display, which was also the world's first OLED product.[157] The Aston Martin DB9 incorporated the world's first automotive OLED display,[158] which was manufactured by Yazaki,[159] followed by the 2004 Jeep Grand Cherokee and the Chevrolet Corvette C6.[160] The number of automakers using OLEDs is still small and limited to the high end of the market. For example, the 2010 Lexus RX features an OLED display instead of a thin-film transistor (TFT-LCD) display. The 2015 Hyundai Sonata and Kia Soul EV use a 3.5" white PMOLED display. By 2004, Samsung Display, a subsidiary of South Korea's largest conglomerate and a former Samsung-NEC joint venture, was the world's largest OLED manufacturer, producing 40% of the OLED displays made in the world,[161] and as of 2010 had a 98% share of the global AMOLED market.[162] The company led the OLED industry, generating $100.2 million of the total $475 million in global OLED market revenue in 2006.[163] As of 2006, it held more than 600 American patents and more than 2,800 international patents, making it the largest owner of AMOLED technology patents.[163] Samsung SDI announced in 2005 the world's largest OLED TV at the time, at 21 inches (53 cm).[164] This OLED featured the highest resolution at the time, of 6.22 million pixels. In addition, the company adopted active-matrix-based technology for its low power consumption and high-resolution qualities.
This was exceeded in January 2008, when Samsung showcased the world's largest and thinnest OLED TV at the time, at 31 inches (78 cm) and 4.3 mm.[165] In May 2008, Samsung unveiled an ultra-thin 12.1-inch (30 cm) laptop OLED display concept, with a 1,280×768 resolution and a claimed infinite contrast ratio.[166] According to Woo Jong Lee, Vice President of the Mobile Display Marketing Team at Samsung SDI, the company expected OLED displays to be used in notebook PCs as soon as 2010.[167] In October 2008, Samsung showcased the world's thinnest OLED display, also the first to be "flappable" and bendable.[168] It measures just 0.05 mm (thinner than paper), yet a Samsung staff member said that it is "technically possible to make the panel thinner".[168] To achieve this thickness, Samsung etched an OLED panel built on a normal glass substrate.

Zinc oxide

Zinc oxide is an inorganic compound with the formula ZnO. ZnO is a white powder that is insoluble in water, and it is widely used as an additive in numerous materials and products including rubbers, plastics, ceramics, glass, cement, lubricants,[10] paints, ointments, adhesives, sealants, pigments, foods, batteries, ferrites, fire retardants, and first-aid tapes. Although it occurs naturally as the mineral zincite, most zinc oxide is produced synthetically.[11] ZnO is a wide-band-gap semiconductor of the II-VI semiconductor group. The native doping of the semiconductor, due to oxygen vacancies or zinc interstitials, is n-type.[12] This semiconductor has several favorable properties, including good transparency, high electron mobility, wide band gap, and strong room-temperature luminescence. Those properties are valuable in emerging applications such as transparent electrodes in liquid crystal displays, energy-saving or heat-protecting windows, and electronics such as thin-film transistors and light-emitting diodes. Pure ZnO is a white powder, but in nature it occurs as the rare mineral zincite, which usually contains manganese and other impurities that confer a yellow to red color.[13] Crystalline zinc oxide is thermochromic, changing from white to yellow when heated in air and reverting to white on cooling.[14] This color change is caused by a small loss of oxygen to the environment at high temperatures to form the non-stoichiometric Zn1+xO, where at 800 °C, x = 0.00007.[14] Zinc oxide is an amphoteric oxide. It is nearly insoluble in water, but it will dissolve in most acids, such as hydrochloric acid:[15] ZnO + 2 HCl → ZnCl2 + H2O. Solid zinc oxide will also dissolve in alkalis to give soluble zincates: ZnO + 2 NaOH + H2O → Na2[Zn(OH)4]. ZnO reacts slowly with fatty acids in oils to produce the corresponding carboxylates, such as oleate or stearate. ZnO forms cement-like products when mixed with a strong aqueous solution of zinc chloride; these are best described as zinc hydroxy chlorides.[16] This cement was used in dentistry.[17] ZnO also forms a cement-like material when treated with phosphoric acid; related materials are used in dentistry.[17] A major component of the zinc phosphate cement produced by this reaction is hopeite, Zn3(PO4)2·4H2O.[18] ZnO decomposes into zinc vapor and oxygen at around 1975 °C at standard oxygen pressure. In a carbothermic reaction, heating with carbon converts the oxide into zinc vapor at a much lower temperature, around 950 °C (ZnO + C → Zn + CO).[19] Zinc oxide can react violently with aluminium and magnesium powders, and with chlorinated rubber and linseed oil on heating, posing a fire and explosion hazard.[20][21] It reacts with hydrogen sulfide to give zinc sulfide.
This reaction is used commercially.[citation needed] Zinc oxide crystallizes in two main forms, hexagonal wurtzite[22] and cubic zincblende. The wurtzite structure is most stable at ambient conditions and thus most common. The zincblende form can be stabilized by growing ZnO on substrates with a cubic lattice structure. In both cases, the zinc and oxide centers are tetrahedral, the most characteristic geometry for Zn(II). ZnO converts to the rocksalt motif at relatively high pressures of about 10 GPa.[12] The many remarkable medical properties of creams containing ZnO can be explained by its elastic softness, which is characteristic of tetrahedrally coordinated binary compounds close to the transition to octahedral structures.[23] Hexagonal and zincblende polymorphs have no inversion symmetry (reflection of a crystal relative to any given point does not transform it into itself). This and other lattice symmetry properties result in piezoelectricity of the hexagonal and zincblende ZnO, and pyroelectricity of hexagonal ZnO. The hexagonal structure has a point group of 6mm (Hermann-Mauguin notation) or C6v (Schoenflies notation), and the space group is P63mc or C6v4. The lattice constants are a = 3.25 Å and c = 5.2 Å; their ratio c/a ~ 1.60 is close to the ideal value for a hexagonal cell, c/a = 1.633.[24] As in most group II-VI materials, the bonding in ZnO is largely ionic (Zn2+–O2−), with corresponding radii of 0.074 nm for Zn2+ and 0.140 nm for O2−. This property accounts for the preferential formation of the wurtzite rather than the zinc blende structure,[25] as well as the strong piezoelectricity of ZnO. Because of the polar Zn-O bonds, zinc and oxygen planes are electrically charged. To maintain electrical neutrality, those planes reconstruct at the atomic level in most related materials, but not in ZnO – its surfaces are atomically flat, stable and exhibit no reconstruction. This anomaly of ZnO is not fully explained.[26] However, studies using wurtzoid structures explained the origin of surface flatness and the absence of reconstruction at ZnO wurtzite surfaces,[27] in addition to the origin of charges on ZnO planes. ZnO is a relatively soft material with an approximate hardness of 4.5 on the Mohs scale.[10] Its elastic constants are smaller than those of relevant III-V semiconductors, such as GaN. The high heat capacity and heat conductivity, low thermal expansion and high melting temperature of ZnO are beneficial for ceramics.[28] The E2 optical phonon in ZnO exhibits an unusually long lifetime of 133 ps at 10 K.[29] Among the tetrahedrally bonded semiconductors, it has been stated that ZnO has the highest piezoelectric tensor, or at least one comparable to that of GaN and AlN.[30] This property makes it a technologically important material for many piezoelectric applications, which require a large electromechanical coupling. ZnO has a relatively large direct band gap of ~3.3 eV at room temperature. Advantages associated with a large band gap include higher breakdown voltages, the ability to sustain large electric fields, lower electronic noise, and high-temperature and high-power operation. The band gap of ZnO can further be tuned to ~3–4 eV by alloying with magnesium oxide or cadmium oxide.[12] Most ZnO has n-type character, even in the absence of intentional doping.
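Before the discussion turns to doping, the lattice constants quoted above can be checked against the ideal hexagonal geometry with a minimal Python sketch; the ideal c/a ratio for a hexagonal close-packed cell is √(8/3) ≈ 1.633.

import math

a, c = 3.25, 5.2          # ZnO wurtzite lattice constants, in ångströms
ideal = math.sqrt(8 / 3)  # ideal c/a for a hexagonal cell

print(f"c/a = {c / a:.3f}, ideal = {ideal:.3f}")  # c/a = 1.600, ideal = 1.633

The small deviation from the ideal ratio is consistent with the partly ionic Zn–O bonding described above.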
Nonstoichiometry is typically the origin of the n-type character, but the subject remains controversial.[31] An alternative explanation has been proposed, based on theoretical calculations, that unintentional substitutional hydrogen impurities are responsible.[32] Controllable n-type doping is easily achieved by substituting Zn with group-III elements such as Al, Ga or In, or by substituting oxygen with the group-VII elements chlorine or iodine.[33] Reliable p-type doping of ZnO remains difficult. This problem originates from the low solubility of p-type dopants and their compensation by abundant n-type impurities. The same problem is observed with GaN and ZnSe. Measurement of p-type conductivity in "intrinsically" n-type material is complicated by the inhomogeneity of samples.[34] Current limitations to p-doping limit electronic and optoelectronic applications of ZnO, which usually require junctions of n-type and p-type material. Known p-type dopants include the group-I elements Li, Na, K; the group-V elements N, P and As; as well as copper and silver. However, many of these form deep acceptors and do not produce significant p-type conduction at room temperature.[12] Electron mobility of ZnO varies strongly with temperature and has a maximum of ~2000 cm²/(V·s) at 80 K.[35] Data on hole mobility are scarce, with values in the range 5–30 cm²/(V·s).[36] ZnO discs, acting as a varistor, are the active material in most surge arresters.[37][38] For industrial use, ZnO is produced at levels of 10⁵ tons per year[13] by three main processes.[28] In the indirect or French process, metallic zinc is melted in a graphite crucible and vaporized at temperatures above 907 °C (typically around 1000 °C). Zinc vapor reacts with the oxygen in the air to give ZnO, accompanied by a drop in its temperature and bright luminescence. Zinc oxide particles are transported into a cooling duct and collected in a bag house. This indirect method was popularized by LeClaire (France) in 1844 and is therefore commonly known as the French process. Its product normally consists of agglomerated zinc oxide particles with an average size of 0.1 to a few micrometers. By weight, most of the world's zinc oxide is manufactured via the French process. The direct or American process starts with diverse contaminated zinc composites, such as zinc ores or smelter by-products. The zinc precursors are reduced (carbothermal reduction) by heating with a source of carbon such as anthracite to produce zinc vapor, which is then oxidized as in the indirect process. Because of the lower purity of the source material, the final product of the direct process is of lower quality than that of the indirect one. A small amount of industrial production involves wet chemical processes, which start with aqueous solutions of zinc salts, from which zinc carbonate or zinc hydroxide is precipitated. The solid precipitate is then calcined at temperatures around 800 °C. Numerous specialised methods exist for producing ZnO for scientific studies and niche applications. These methods can be classified by the resulting ZnO form (bulk, thin film, nanowire), temperature ("low", that is, close to room temperature, or "high", that is, T ~ 1000 °C), process type (vapor deposition or growth from solution) and other parameters. Large single crystals (many cubic centimeters) can be grown by gas transport (vapor-phase deposition), hydrothermal synthesis,[26][39][40] or melt growth.[5] However, because of the high vapor pressure of ZnO, growth from the melt is problematic.
Growth by gas transport is difficult to control, leaving the hydrothermal method as the preferred route.[5] Thin films can be produced by chemical vapor deposition, metalorganic vapour phase epitaxy, electrodeposition, pulsed laser deposition, sputtering, sol-gel synthesis, atomic layer deposition, spray pyrolysis, etc. Ordinary white powdered zinc oxide can be produced in the laboratory by electrolyzing a solution of sodium bicarbonate with a zinc anode; zinc hydroxide and hydrogen gas are produced (net reaction: Zn + 2 H2O → Zn(OH)2 + H2). The zinc hydroxide decomposes to zinc oxide upon heating (Zn(OH)2 → ZnO + H2O). Nanostructures of ZnO can be synthesized in a variety of morphologies including nanowires, nanorods, tetrapods, nanobelts, nanoflowers, nanoparticles, etc. Nanostructures can be obtained with most of the above-mentioned techniques, under certain conditions, and also with the vapor-liquid-solid method.[26][41][42] Solution synthesis is typically carried out at temperatures of about 90 °C, in an equimolar aqueous solution of zinc nitrate and hexamine, the latter providing the basic environment. Certain additives, such as polyethylene glycol or polyethylenimine, can improve the aspect ratio of the ZnO nanowires.[43] Doping of the ZnO nanowires has been achieved by adding other metal nitrates to the growth solution.[44] The morphology of the resulting nanostructures can be tuned by changing the parameters relating to the precursor composition (such as the zinc concentration and pH) or to the thermal treatment (such as the temperature and heating rate).[45] Aligned ZnO nanowires on pre-seeded silicon, glass, and gallium nitride substrates have been grown using aqueous zinc salts such as zinc nitrate and zinc acetate in basic environments.[46] Pre-seeding substrates with ZnO creates sites for homogeneous nucleation of ZnO crystals during the synthesis. Common pre-seeding methods include in-situ thermal decomposition of zinc acetate crystallites, spin-coating of ZnO nanoparticles, and the use of physical vapor deposition methods to deposit ZnO thin films.[47][48] Pre-seeding can be performed in conjunction with top-down patterning methods such as electron beam lithography and nanosphere lithography to designate nucleation sites prior to growth. Aligned ZnO nanowires can be used in dye-sensitized solar cells and field emission devices.[49][50] Zinc compounds were probably used by early humans, in processed and unprocessed forms, as a paint or medicinal ointment, but their composition is uncertain. The use of pushpanjan, probably zinc oxide, as a salve for eyes and open wounds is mentioned in the Indian medical text the Charaka Samhita, thought to date from 500 BC or before.[51] Zinc oxide ointment is also mentioned by the Greek physician Dioscorides (1st century AD).[52] Galen suggested treating ulcerating cancers with zinc oxide,[53] as did Avicenna in The Canon of Medicine. Zinc oxide is no longer used for treating skin cancer, though it is still used as an ingredient in products such as baby powder and creams against diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments.[54] The Romans produced considerable quantities of brass (an alloy of zinc and copper) as early as 200 BC by a cementation process in which copper was reacted with zinc oxide.[55] The zinc oxide is thought to have been produced by heating zinc ore in a shaft furnace. This liberated metallic zinc as a vapor, which then ascended the flue and condensed as the oxide.
This process was described by Dioscorides in the 1st century AD.[56] Zinc oxide has also been recovered from zinc mines at Zawar in India, dating from the second half of the first millennium BC. This was presumably also made in the same way and used to produce brass.[52] From the 12th to the 16th century, zinc and zinc oxide were recognized and produced in India using a primitive form of the direct synthesis process. From India, zinc manufacture moved to China in the 17th century. In 1743, the first European zinc smelter was established in Bristol, United Kingdom.[57] The main use of zinc oxide (zinc white) was in paints and as an additive to ointments. Zinc white was accepted as a pigment in oil paintings by 1834, but it did not mix well with oil. This problem was solved by optimizing the synthesis of ZnO. In 1845, LeClaire in Paris was producing zinc white oil paint on a large scale, and by 1850, zinc white was being manufactured throughout Europe. The success of zinc white paint was due to its advantages over the traditional white lead: zinc white is essentially permanent in sunlight, it is not blackened by sulfur-bearing air, it is non-toxic, and it is more economical. Because zinc white is so "clean", it is valuable for making tints with other colors, but it makes a rather brittle dry film when unmixed with other colors. For example, during the late 1890s and early 1900s, some artists used zinc white as a ground for their oil paintings. All those paintings developed cracks over the years.[58] In recent times, most zinc oxide was used in the rubber industry to resist corrosion. In the 1970s, the second largest application of ZnO was photocopying. High-quality ZnO produced by the "French process" was added to photocopying paper as a filler. This application was soon displaced by titanium dioxide.[28] The applications of zinc oxide powder are numerous, and the principal ones are summarized below. Most applications exploit the reactivity of the oxide as a precursor to other zinc compounds. For material science applications, zinc oxide has high refractive index, high thermal conductivity, binding, antibacterial and UV-protection properties. Consequently, it is added into materials and products including plastics, ceramics, glass, cement,[59] rubber, lubricants,[10] paints, ointments, adhesives, sealants, concrete manufacturing, pigments, foods, batteries, ferrites, fire retardants, etc.[60] Between 50% and 60% of ZnO use is in the rubber industry.[61] Zinc oxide, along with stearic acid, is used in the vulcanization of rubber.[28][62][63] The ZnO additive also protects rubber from fungi (see medical applications) and UV light. The ceramic industry consumes a significant amount of zinc oxide, in particular in ceramic glaze and frit compositions. The relatively high heat capacity, thermal conductivity and high-temperature stability of ZnO, coupled with a comparatively low coefficient of expansion, are desirable properties in the production of ceramics. ZnO affects the melting point and optical properties of glazes, enamels, and ceramic formulations. As a low-expansion secondary flux, zinc oxide improves the elasticity of glazes by reducing the change in viscosity as a function of temperature, and helps prevent crazing and shivering. By substituting ZnO for BaO and PbO, the heat capacity is decreased and the thermal conductivity is increased. Zinc in small amounts improves the development of glossy and brilliant surfaces. However, in moderate to high amounts, it produces matte and crystalline surfaces.
With regard to color, zinc has a complicated influence.[61] Zinc oxide mixed with about 0.5% iron(III) oxide (Fe2O3) is called calamine and is used in calamine lotion. Two minerals, zincite and hemimorphite, have been historically called calamine. When mixed with eugenol, a ligand, zinc oxide eugenol is formed, which has applications as a restorative and prosthodontic material in dentistry.[17][64] Reflecting the basic properties of ZnO, fine particles of the oxide have deodorizing and antibacterial[65] properties and for that reason are added into materials including cotton fabric, rubber, oral care products,[66][67] and food packaging.[68][69] The enhanced antibacterial action of fine particles compared to bulk material is not exclusive to ZnO and is observed for other materials, such as silver.[70] This property results from the increased surface area of the fine particles. Zinc oxide is widely used to treat a variety of skin conditions, including dermatitis, itching due to eczema, diaper rash and acne. It is used in products such as baby powder and barrier creams to treat diaper rashes, calamine cream, anti-dandruff shampoos, and antiseptic ointments.[54][71] It is also a component in tape (called "zinc oxide tape") used by athletes as a bandage to prevent soft tissue damage during workouts.[72] Zinc oxide can be used[73] in ointments, creams, and lotions to protect against sunburn and other damage to the skin caused by ultraviolet light (see sunscreen). It is the broadest-spectrum UVA and UVB absorber[74][75] approved for use as a sunscreen by the U.S. Food and Drug Administration (FDA),[76] and is completely photostable.[77] When used as an ingredient in sunscreen, zinc oxide blocks both UVA (320–400 nm) and UVB (280–320 nm) rays of ultraviolet light. Zinc oxide and the other most common physical sunscreen, titanium dioxide, are considered to be nonirritating, nonallergenic, and non-comedogenic.[78] Zinc from zinc oxide is, however, slightly absorbed into the skin.[79] Many sunscreens use nanoparticles of zinc oxide (along with nanoparticles of titanium dioxide) because such small particles do not scatter light and therefore do not appear white. There has been concern that they might be absorbed into the skin.[80][81] A study published in 2010 found that 0.23% to 1.31% (mean 0.42%) of the zinc in venous blood samples could be traced to ZnO nanoparticles applied to human skin for 5 days, and traces were also found in urine samples.[82] In contrast, a comprehensive review of the medical literature from 2011 says that no evidence of systemic absorption can be found in the literature.[83] Zinc oxide nanoparticles can enhance the antibacterial activity of ciprofloxacin. Nano-ZnO with an average particle size between 20 nm and 45 nm has been shown to enhance the antibacterial activity of ciprofloxacin against Staphylococcus aureus and Escherichia coli in vitro. The enhancing effect of this nanomaterial is concentration-dependent against all test strains. This effect may be due to two reasons. First, zinc oxide nanoparticles can interfere with the NorA protein, an efflux pump that confers resistance in bacteria by pumping hydrophilic fluoroquinolones out of the cell. Second, zinc oxide nanoparticles can interfere with the Omf protein, which is responsible for the permeation of quinolone antibiotics into the cell.[84] Zinc oxide is a component of cigarette filters.
A filter consisting of charcoal impregnated with zinc oxide and iron oxide removes significant amounts of hydrogen cyanide (HCN) and hydrogen sulfide (H2S) from tobacco smoke without affecting its flavor.[60] Zinc oxide is added to many food products, including breakfast cereals, as a source of zinc,[85] a necessary nutrient. (Zinc sulfate is also used for the same purpose.) Some prepackaged foods also include trace amounts of ZnO even if it is not intended as a nutrient. Zinc oxide was linked to dioxin contamination in pork exports in the 2008 Chilean pork crisis. The contamination was found to be due to dioxin-contaminated zinc oxide used in pig feed.[86] Zinc white is used as a pigment in paints and is more opaque than lithopone, but less opaque than titanium dioxide.[11] It is also used in coatings for paper. Chinese white is a special grade of zinc white used in artists' pigments.[87] The use of zinc white (zinc oxide) as a pigment in oil painting started in the middle of the 18th century.[88] It has partly replaced the poisonous lead white and was used by painters such as Böcklin, Van Gogh,[89] Manet, Munch and others. It is also a main ingredient of mineral makeup (CI 77947).[90] Micronized and nano-scale zinc oxide and titanium dioxide provide strong protection against UVA and UVB ultraviolet radiation, and are used in suntan lotion,[91] and also in UV-blocking sunglasses for use in space and for protection when welding, following research by scientists at the Jet Propulsion Laboratory (JPL).[92] Paints containing zinc oxide powder have long been utilized as anticorrosive coatings for metals. They are especially effective for galvanized iron. Iron is difficult to protect because its reactivity with organic coatings leads to brittleness and lack of adhesion. Zinc oxide paints retain their flexibility and adherence on such surfaces for many years.[60] ZnO highly n-type doped with aluminium, gallium, or indium is transparent and conductive (transparency ~90%, lowest resistivity ~10⁻⁴ Ω·cm[93]). ZnO:Al coatings are used for energy-saving or heat-protecting windows. The coating lets the visible part of the spectrum in but either reflects the infrared (IR) radiation back into the room (energy saving) or does not let the IR radiation into the room (heat protection), depending on which side of the window has the coating.[13] Plastics, such as polyethylene naphthalate (PEN), can be protected by applying a zinc oxide coating. The coating reduces the diffusion of oxygen through the PEN.[94] Zinc oxide layers can also be used on polycarbonate in outdoor applications. The coating protects polycarbonate from solar radiation, decreases its oxidation rate and reduces photo-yellowing.[95] Zinc oxide depleted in ⁶⁴Zn (the zinc isotope with atomic mass 64) is used in corrosion prevention in nuclear pressurized water reactors. The depletion is necessary because ⁶⁴Zn is transformed into radioactive ⁶⁵Zn under irradiation by the reactor neutrons.[96] Zinc oxide (ZnO) is used as a pretreatment step to remove hydrogen sulfide (H2S) from natural gas, following hydrogenation of any sulfur compounds, prior to a methane reformer, whose catalyst H2S can poison. At temperatures between about 230 and 430 °C (446 and 806 °F), H2S is converted to water by the following reaction: ZnO + H2S → ZnS + H2O. The zinc sulfide (ZnS) is replaced with fresh zinc oxide when the zinc oxide has been consumed.[97] ZnO has a wide direct band gap (3.37 eV, or 375 nm, at room temperature).
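As a quick unit-conversion check of the band-gap figure above (and of the exciton energy discussed in the next paragraph), the following Python sketch converts gap energy to photon wavelength via λ = hc/E and compares the room-temperature thermal energy kT with ZnO's ~60 meV exciton binding energy. Note that 3.37 eV converts to ≈368 nm; the commonly quoted 375 nm corresponds to a gap of about 3.31 eV, reflecting the spread of band-gap values in the literature.

HC_EV_NM = 1239.84   # h*c in eV·nm
K_B_EV = 8.617e-5    # Boltzmann constant in eV/K

def gap_to_wavelength_nm(e_gap_ev):
    return HC_EV_NM / e_gap_ev

print(round(gap_to_wavelength_nm(3.37)))   # ≈ 368 nm (375 nm ↔ ~3.31 eV)
kT = K_B_EV * 300                          # thermal energy at 300 K, ≈ 25.9 meV
print(round(0.060 / kT, 1))                # ≈ 2.3, close to the "2.4 times" quoted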
Therefore, its most common potential applications are in laser diodes and light-emitting diodes (LEDs).[100] Some optoelectronic applications of ZnO overlap with those of GaN, which has a similar band gap (~3.4 eV at room temperature). Compared to GaN, ZnO has a larger exciton binding energy (~60 meV, about 2.4 times the room-temperature thermal energy), which results in bright room-temperature emission from ZnO. ZnO can be combined with GaN for LED applications, for instance as a transparent conducting oxide layer, and ZnO nanostructures can provide better light outcoupling.[101] Other properties of ZnO favorable for electronic applications include its stability to high-energy radiation and the possibility of patterning it by wet chemical etching.[102] Radiation resistance[103] makes ZnO a suitable candidate for space applications. ZnO is the most promising candidate in the field of random lasers to produce an electronically pumped UV laser source. The pointed tips of ZnO nanorods result in a strong enhancement of the electric field; therefore, they can be used as field emitters.[104] Aluminium-doped ZnO layers are used as transparent electrodes. The components Zn and Al are much cheaper and less toxic than the generally used indium tin oxide (ITO). One application which has begun to be commercially available is the use of ZnO as the front contact for solar cells and for liquid crystal displays.[105] Transparent thin-film transistors (TTFT) can be produced with ZnO. As field-effect transistors, they may not even need a p–n junction,[106] thus avoiding the p-type doping problem of ZnO. Some of these field-effect transistors even use ZnO nanorods as conducting channels.[107] Zinc oxide nanorod sensors are devices that detect changes in the electric current passing through zinc oxide nanowires due to the adsorption of gas molecules. Selectivity to hydrogen gas was achieved by sputtering Pd clusters on the nanorod surface. The addition of Pd appears to be effective in the catalytic dissociation of hydrogen molecules into atomic hydrogen, increasing the sensitivity of the sensor device. The sensor detects hydrogen concentrations down to 10 parts per million at room temperature, whereas there is no response to oxygen.[108][109] ZnO has also been considered for spintronics applications: if doped with 1–10% of magnetic ions (Mn, Fe, Co, V, etc.), ZnO could become ferromagnetic, even at room temperature. Such room-temperature ferromagnetism in ZnO:Mn has been observed,[110] but it is not yet clear whether it originates from the matrix itself or from secondary oxide phases. The piezoelectricity of textile fibers coated in ZnO has been shown capable of powering "self-powered nanosystems" from everyday mechanical stress such as wind or body movements.[111][112] In 2008, the Center for Nanostructure Characterization at the Georgia Institute of Technology reported producing an electricity-generating device (called a flexible charge pump generator) delivering alternating current by stretching and releasing zinc oxide nanowires. This mini-generator creates an oscillating voltage of up to 45 millivolts, converting close to seven percent of the applied mechanical energy into electricity. Researchers used wires with lengths of 0.2–0.3 mm and diameters of three to five micrometers, but the device could be scaled down to smaller sizes.[113] ZnO is a promising anode material for lithium-ion batteries because it is cheap, biocompatible, and environmentally friendly.
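The theoretical-capacity figures quoted in the next paragraph can be checked back-of-the-envelope with Faraday's law, Q = nF/(3.6·M) in mAh/g. The electron counts here are assumptions for illustration: n = 3 for ZnO (conversion ZnO + 2Li → Zn + Li2O plus alloying Zn + Li → LiZn) and n = 2 for the simple conversion oxides.

F = 96485.0  # Faraday constant, C/mol

def capacity_mah_per_g(n_electrons, molar_mass):
    # Q [mAh/g] = n * F [C/mol] / (3.6 [C per mAh] * M [g/mol])
    return n_electrons * F / (3.6 * molar_mass)

for oxide, n, M in [("ZnO", 3, 81.38), ("CoO", 2, 74.93),
                    ("NiO", 2, 74.69), ("CuO", 2, 79.55)]:
    print(f"{oxide}: {capacity_mah_per_g(n, M):.0f} mAh/g")
# ZnO ≈ 988 (vs the ~978 quoted), CoO ≈ 715, NiO ≈ 718, CuO ≈ 674

The conversion-oxide values match the cited figures almost exactly; the small ZnO discrepancy comes down to rounding conventions in the literature.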
ZnO has a higher theoretical capacity (978 mAh g⁻¹) than many other transition metal oxides such as CoO (715 mAh g⁻¹), NiO (718 mAh g⁻¹) and CuO (674 mAh g⁻¹).[114] As a food additive, zinc oxide is on the U.S. FDA's list of generally recognized as safe, or GRAS, substances.[115] Zinc oxide itself is non-toxic; however, it is hazardous to inhale zinc oxide fumes, as generated when zinc or zinc alloys are melted and oxidized at high temperature. This problem occurs while melting brass, because the melting point of brass is close to the boiling point of zinc.[116] Exposure to zinc oxide in the air, which also occurs while welding galvanized (zinc-plated) steel, can result in a nervous malady called metal fume fever. For this reason, galvanized steel is typically not welded, or the zinc is removed first.[117]

Chevrolet Cruze

The Chevrolet Cruze is a compact car that has been made by the Chevrolet division of General Motors since 2008. The nameplate was used previously in Japan, for a version of a subcompact hatchback car produced under a joint venture with Suzuki from 2001 to 2007 and based on the Suzuki Ignis. Since 2009, the Cruze nameplate has designated a globally developed, designed, and manufactured four-door compact sedan (in 2010 it also replaced the Chevrolet Cobalt),[1] complemented by a five-door hatchback body variant from 2011 and a station wagon in 2012. The Cruze was first released in 2008 in the South Korean market under the name Daewoo Lacetti Premiere, which it kept until the phasing out of the Daewoo brand in favor of Chevrolet in 2011. In Australia, the model has been on sale since 2009 as the Holden Cruze. This new-generation Cruze does not serve as a replacement for the Suzuki-derived Japanese-market predecessor. Instead, it replaces three other compact models: the Chevrolet Optra, sold internationally under various names (such as the Suzuki Forenza in the United States, Chevrolet Optra in Latin America and Canada, and Chevrolet/Daewoo Lacetti in Europe and Asia); the Chevrolet Cobalt, sold exclusively in North America; and the Opel-sourced, Australian-market Holden Cruze (later returning in 2012 briefly as an Opel until the next year, and again in 2015 as a Holden). While production of the Cruze in the United States and Mexico ended in 2019, the car is still produced and sold in other markets worldwide. Before the release of the global Chevrolet Cruze compact sedan in 2008, General Motors made use of the name "Cruze" between 2001 and 2008 in Japan. Announced as the Chevrolet YGM1 concept car at the Tokyo Motor Show in 1999, the original Cruze was derived from the subcompact Suzuki Ignis five-door hatchback (known as the Suzuki Swift in Japan).[2][3] Despite the Chevrolet branding, the YGM1, like the production car, was the work of GM's Australian arm, Holden.[4][5] Along with the styling, Holden executed most of the engineering work and was responsible for devising the "Cruze" nameplate.[2] The Cruze came with either a 1.3- or 1.5-liter engine, coupled to either a five-speed manual or a four-speed automatic transmission. GM revealed the production Chevrolet Cruze, manufactured by Suzuki in Japan,[6] in October 2001, with Japanese sales commencing the following month.[7] From 2002 through to 2006, this generation of Cruze was sold in Australia and New Zealand.
In New Zealand, this generation of Cruze was sold as the Chevrolet Cruze.[8] The production Cruze had standard front-wheel drive, with all-wheel drive optional.[7][9] Chevrolet pursued a marketing strategy that positioned the high-riding Cruze as a light-duty sport utility vehicle (SUV).[10] This contrasted with Suzuki's approach with the Ignis, marketed as a conventional passenger model.[9] From 2003, Suzuki of Europe began manufacturing the Cruze as the Suzuki Ignis—representing a facelift of the original Ignis, but only for European markets.[11] In 2008, GM introduced the Cruze compact car, carrying the "J300" internal designation.[13] This J300 iteration serves as a replacement for the Chevrolet Cobalt, Daewoo Lacetti and Holden Astra—all unrelated cars. GM phased out production of the Cobalt and its badge-engineered counterpart, the Pontiac G5, in 2010, just before manufacturing of the Chevrolet Cruze commenced. The first renderings of the Cruze were revealed by GM at a press conference on July 15, 2008,[14] with the first official images released on August 21, 2008.[15] Cruze production sites include Gunsan, Jeonbuk, South Korea;[16] Saint Petersburg, Russia;[17] Shenyang, China;[18] Halol, India;[19] Hanoi, Vietnam since April 2010 in complete knock-down (CKD) form;[20][21] Ust-Kamenogorsk, Kazakhstan from May 2010;[22] Rayong, Thailand after December 2010;[23] and São Caetano do Sul, Brazil from 2011.[24] Holden's localized hatchback version of the Cruze, built at the Elizabeth, South Australia factory from late 2011, joined the Cruze sedan manufactured there since March 2011.[16] GM in the United States upgraded the existing plant in Lordstown, Ohio to manufacture the Cruze, investing more than US$350 million.[25] At the ceremony marking the start of Cruze production in Ohio, Mark Reuss, the president of GM's North American operations, said, "This is everything for us". It is described as GM's most significant new vehicle introduction into North America since the Chapter 11 reorganization in 2009, and is GM's latest attempt to build a small car that North American consumers would "buy because they like it – not simply because it is cheap".[26] The Cruze is underpinned by the front-wheel-drive GM Delta II platform, and GM has confirmed that the Cruze development program occurred under a global design and engineering team.[27] GM Daewoo in South Korea played a leading role in the design and engineering of the Cruze, along with GM's German-based Opel division.[28][29][30] This development program spanned 27 months at a cost of US$4 billion. A total of 221 prototypes were tested in Australia, Canada, China, South Korea, Sweden, the United Kingdom and the United States.[31] According to GM, the Cruze's body structure is 65 percent high-strength steel.[32] MacPherson struts are utilized in the front suspension, with a solid torsion-beam axle for the rear, avoiding the cost and complexity of the modern multi-link independent rear suspension used by some more expensive rivals.[31][33] According to GM's global product development chief Mark Reuss, the North American version of the Cruze is modified from the global platform, requiring reinforcements to the engine compartment because it offers a bigger engine than in other markets, and uses torsion-beam suspension.[34] Hydraulically assisted (electric for the North American market) rack-and-pinion steering gives a 10.9-meter (36 ft) turning circle.
Ventilated front and solid rear disc brakes are employed, both using piston steel calipers.[31] To counteract noise, vibration, and harshness, engineers designed the Cruze with an isolated four-point engine mount and implemented sound-damping material in areas including the front-of-dashboard panel, luggage compartment, decklid internals, doors, carpet and headlining. Further noise suppression through the use of a triple-layer sealing system in the doors has also been employed.[31] A five-door Cruze hatchback was unveiled as a concept car at the 2010 Paris Motor Show on October 1, 2010. Cruze hatchback sales began in Europe in mid-2011.[35][36] Holden in Australia was responsible for the design and development of the hatchback body variant.[37] GM unveiled the Chevrolet Cruze station wagon in February 2012 at the Geneva Motor Show. Load space ranges from about 500 litres (20 cu ft) up to the window line in the rear, to nearly 1,500 litres (50 cu ft) up to the rooftop with the rear seats folded down.[38] The Australasian New Car Assessment Program (ANCAP) announced in May 2009 that it had awarded the Cruze a full five stars in its independent crash safety test, with 35.04 out of a possible 37 points.[39] The following July, the China New Car Assessment Program (C-NCAP) awarded the Cruze a maximum five stars in its independent crash safety test. The Cruze SE 1.6-liter tested scored a maximum of 16 points in the side-impact collision, 14.44 in the front-end collision, and 15.73 in the 40 percent frontal offset collision.[40] Euro NCAP released its rating in November, with the Chevrolet Cruze again receiving the full five-star grading. While the Cruze scored 96 percent for adult protection and 84 percent for child occupant protection, Euro NCAP's figure for pedestrian protection is quoted at a significantly lower 34 percent.[41] In December 2009, the South Korean-specification Cruze—the Daewoo Lacetti Premiere—received the top rating of five stars from the Korean New Car Assessment Program (KNCAP). According to KNCAP, the Lacetti Premiere received the five-star rating in the frontal, offset frontal, side, and whiplash tests.[42] In the United States, the Cruze received the highest possible ratings of "good" in front, side, rear and rollover crash protection tests by the Insurance Institute for Highway Safety (IIHS), which recognized the Cruze as a 2011 Top Safety Pick.[43] Moreover, the National Highway Traffic Safety Administration (NHTSA) awarded the Cruze its highest five-star rating for safety. The score is broken down into maximum five-star results for frontal impact (driver and passenger), side impact (driver and passenger), and the side pole test (driver). The NHTSA certified the Cruze's rollover rating at four out of five stars.[44] GM announced in April 2011 that 2,100 North American-market Cruze models would be recalled following a report of the steering wheel breaking away from the steering column during motion.[45] According to Consumer Reports, during its first year the Cruze scored the lowest in reliability among compact sedans.[46] General Motors ordered a recall on June 22, 2012 for 413,418 Cruze models, manufactured at the Lordstown, Ohio plant in the United States, due to a risk of engine compartment fires. The recall covered 2011 and 2012 model year Cruze sedans built from September 2010 through May 2012 and affected vehicles sold in the United States, Canada and Israel.[47] The problem can result when liquids become trapped near the engine and catch fire.
In Australia, 9,547 Australian-built Cruzes were also recalled, although no cases of engine fires had been reported there.[48] General Motors (GM) ordered a recall on August 16, 2013 for 292,879 model year 2011 and 2012 Cruze models, manufactured at the Lordstown, Ohio plant in the United States. General Motors told the National Highway Traffic Safety Administration the recall was due to a potential intermittent loss of brake assist in Cruze models featuring the combination of the 1.4-liter dual-overhead-cam gasoline turbo engine and the 6T40 FWD automatic transmission. GM said it was aware of 27 alleged low-speed crashes due to brake issues that may include this particular issue, but it reported no injuries. To address the issue, GM said dealers would replace a micro-switch in the power brake vacuum pipe assembly.[49] As of March 28, 2014, GM halted the sale of 2013 and 2014 Chevrolet Cruze compacts with 1.4-liter engines—models that account for about 60% of Cruze sales—but GM did not initially say why it issued the order.[50] GM later announced that the cars were being recalled due to a faulty right-side axle shaft.[51] The Cruze was given a mild facelift for 2013 and unveiled by GM at the 2012 Busan Motor Show, South Korea. The Cruze received an updated front fascia, with the air vents around the foglamps redesigned entirely, and the grille and headlights also receiving minor updates. New alloy wheels were also designed for the Cruze. GM's optional MyLink entertainment system is now offered as well. This model was first sold in Korea, then Malaysia, before being sold in other markets.[52][53] On April 12, 2014, Chevrolet announced that it would unveil a refreshed Chevrolet Cruze at the 2014 New York Auto Show as a 2015 model, with an updated grille and a more angular shape derived from the Malibu.[54] For the Asian version, the rear end was completely restyled with new lamps, trunk lid, and rear bumper. The Chinese version of the facelift shares the same rear-end design, but has an exclusive redesigned front end, also featuring new front lamps. Engines fitted to the Cruze are the 1.6-liter Family 1 inline-four, a 1.8-liter version of the same, and a 2.0-liter VM Motori RA 420 SOHC turbocharged common-rail diesel, marketed as VCDi.[55] All three engines are coupled to a five-speed manual or optional six-speed automatic transmission featuring Active Select.[56][57] When the Cruze launched in the United States in 2010, a new 1.4-liter Family 0 turbocharged gasoline engine was introduced.[58][59] North American models fitted with the 1.8-liter gasoline engine have also been upgraded to a standard six-speed manual.[58] In 2011, a new 2.0-liter Family Z diesel engine, marketed as VCDi, replaced the previous VM Motori VCDi unit of equal displacement.[60] Since late 2011, Chinese-market models have been available with a turbocharged 1.6-liter engine with a six-speed manual transmission.
The Chevrolet Cruze was launched in the Egyptian market during mid-2009.[61] South African sales of the Cruze commenced in September 2009.[62] South Korean-market versions of the Cruze entered production there in 2008 as the "Daewoo Lacetti Premiere".[13] The Lacetti debuted on October 30, 2008, featuring the 1.6-liter naturally aspirated engine.[63] On January 30, 2009, GM Daewoo introduced the turbodiesel engine variant.[64] In line with the February 2011 renaming of "GM Daewoo" to "GM Korea", the Lacetti Premiere adopted the international "Chevrolet Cruze" name from March 2, 2011.[65][66] For owners of the previous model, the Lacetti, GM Korea decided to replace the old emblem with that of Chevrolet for free.[67] The Chevrolet Cruze was launched in the Chinese market on April 18, 2009 as a sedan.[68] Transmission choices were a five-speed manual or a six-speed automatic, along with 1.6- or 1.8-liter engines. The sedan range consisted of the 1.6 SL, 1.6 SE, 1.8 SE (automatic only) and 1.8 SX (automatic only). Hatchback models were introduced in 2013, available with the 1.6-liter or 1.6-liter turbo engines. The Chevrolet Cruze was released in India on October 12, 2009, manufactured at GM India's Halol factory.[19][69] It was offered in only two versions, LT and LTZ, in diesel form only (VCDi).[70] During 2009, there were reports that the Cruze was to become available in Malaysia with 1.6- and 1.8-liter engines.[71] The Naza automotive group in Malaysia announced that it expected to launch the Cruze in the Malaysian market for the first time in the second quarter of 2010 and to sell 1,200 to 1,500 units in 2010.[72] In Thailand, the car launched in December 2010, built at GM's Rayong facility. Specification levels comprised Base (1.6-liter), LS (1.6- and 1.8-liter), and LT and LTZ (1.8-liter); a 6-speed automatic was standard on all models except the 1.6 Base, which used a 5-speed manual, and an optional 2.0-liter VCDi was available on the LTZ variant with the 6-speed automatic. The Australian arm of GM, Holden, announced at the Melbourne International Motor Show on February 27, 2009 that sales of the South Korean-produced Cruze would begin under the Holden brand.[73] Replacing the Holden Viva, the Cruze reached dealerships on June 1.[74] The Cruze also became the replacement for the Holden Astra, dropped from the Holden lineup the following August.[75][76] Given the model designation JG, the Holden Cruze was launched with the 1.8-liter petrol engine and optional 2.0-liter turbodiesel. Both engines are mated to the five-speed manual transmission or optional six-speed automatic. Electronic stability control (ESC), seat belt pretensioners and six airbags were standard fitment across the range.[32] The "CD" opened up the range, topped by a more upmarket "CDX". The "CD" equipment list comprised 16-inch steel wheels, air conditioning, cruise control, a trip computer, power windows and automatic headlamps.[77] "CDX" versions add 17-inch alloy wheels, front fog lamps, a leather-wrapped steering wheel and upholstery, heated front seats and rear parking sensors.[78] Initially a petrol-only trim, the "CDX" gained diesel availability in early 2010.[79][80] On March 18, 2010, Holden issued a recall for 9,098 petrol-engined 2010 model year Cruzes in Australia and a further 485 in New Zealand over a faulty fuel hose.
According to Holden, some hoses on 1.8-liter cars had developed leaks, although no accidents or injuries had been reported prior to the recall.[81] The recall followed a stop-delivery notice issued by Holden to its dealers on March 3 while the automaker conducted an investigation into the matter.[82] Holden announced on December 22, 2008 that its Elizabeth, South Australia production line would be split to commence local production of the Cruze sedan and the Australian-developed hatchback.[83] Production was originally scheduled to start by September 2010.[84] However, it was confirmed in January 2010 that production would in fact begin in March 2011.[85] The announcement to assemble the car came as a response to the slowing sales of the larger, locally produced Holden Commodore range.[83] The Australian Government committed A$149 million to the program from its $6.2 billion Green Car Innovation Fund, with a further $30 million given by the State Government of South Australia.[83] GM Holden matched both amounts, but the then chairman and managing director Mark Reuss would not reveal Holden's total investment.[83] Reuss announced that his firm would consider using liquid petroleum gas (LPG), compressed natural gas (CNG), ethanol (E85) flexible-fuel, and petrol/electric hybrid start-stop powertrain technologies.[83][86] These technologies, if realized, would supplement the four-cylinder petrol and diesel powertrain offerings already confirmed by Holden at the time of the announcement.[87] At a media event on February 28, 2011, Holden unveiled the Australian-assembled Cruze sedan in facelifted "Series II" guise,[88] otherwise known as the JH series.[89] Prime Minister Julia Gillard attended the February launch to drive the first example off Holden's production line before full-scale production commenced in March.[16] Holden confirmed an initial local content level of between 40 and 50 percent assessed by retail value, with an aim of increasing Cruze localization over time.[90][91] Series II styling revisions to the grille, lower air intake, and bumper softened the front end to bear a closer resemblance to Holden's larger VE II Commodore.[88] Further differentiation from the original was achieved via the fitment of amber front indicator lights, jewelled-bezel headlamps, remodelled wheel trims, and adjustments to the lower portion of the rear bumper.[88][92] Carrying over largely unchanged is the 1.8-liter petrol inline-four, tweaked to yield slight enhancements in drivability.[93] When the automatic transmission is specified, the 1.8-liter is now teamed with GM's six-speed 6T30 unit, lighter and more compact than the previous 6T40.[89] Diesel remains optional for "CD" and "CDX" specifications over the standard 1.8-liter petrol.[93] Alterations to the 2.0-liter turbodiesel have resulted in an additional 10 kilowatts (13 hp) and 40 newton metres (30 lb⋅ft), and a slight reduction in fuel consumption for the manual variant, now a six-speed unit.[92] However, the headline change is the release of the turbocharged 1.4-liter engine, dubbed iTi by Holden for intelligent turbo induction.[92] The inclusion of the 1.4 also brings an upgrade to electric (as opposed to hydraulic) power steering and affixes a Watt's linkage to the torsion-beam rear suspension.[16] Linked with six-speed manual or automatic transmissions, the 1.4 is fitted as standard to the new "SRi" and "SRi-V" sports-oriented trims, but is available at extra cost on the base "CD".[92] The
new "SRi" and "SRi-V" models have their respective badges embossed onto the grille insert, are fitted with their own front bumper design, and feature side skirts, chrome exterior door handles, a rear lip spoiler, and five-spoke 17-inch alloy wheels.[88][94] Over the "CD", "SRi" gains a leather-covered steering wheel rim and shift lever, with the "SRi-V" extending this upholstering to the seating.[94] A heat function for the front seats, keyless entry with push-button engine start, reversing sensors, and seven-inch LCD multimedia unit are also part of the "SRi-V" equipment list.[94] This multimedia system integrates satellite navigation, the CD and DVD players, and a 10 GB internal hard disk drive.[94] In mid-November 2011, Holden released the MY12 update to the Series II Cruze. This update coincided with the release of the hatchback body variant and saw Bluetooth telephone connectivity standard across the range.[95] In April 2013 the Series II Cruze received an update and price drops. The update included rear-parking sensors, a 7-inch touch-screen, suspension adjustments and improved automatic gearboxes across the range along with many other new extras such as a larger 1.6-liter turbocharged engine as standard on the SRi and SRi-V, replacing the 1.4-liter turbo. Holden ended manufacturing of the Cruze at its Elizabeth plant on October 7, 2016, replaced by the Astra hatchback and new generation Cruze sedan—both imported.[96] European specification variants of the Cruze are offered with 1.6- and 1.8-litre petrol engines, and 2.0-litre and (from 2012) 1.7-litre diesel engines. In mid-2011, with the arrival of the five-door hatchback variant, the 1.6-litre petrol engine received an upgrade from 113 bhp to 122 bhp. Exports from the South Korean factory began on February 24, 2009.[97][98] Mexico became the first North American country to receive the car, going on sale for the 2010 model year in late 2009. 
Imported from South Korea, the Chevrolet Cruze in Mexico replaced both the Chevrolet Astra (last sold in 2008) and the Optra as the compact offering there.[99][100] The US and Canadian version of the Chevrolet Cruze entered limited production at Lordstown, Ohio in July 2010 as a 2011 model, replacing the Chevrolet Cobalt.[101] Full production began September 8, 2010.[102] For these markets, the Cruze utilizes a more advanced Watts Z-link rear suspension from the Opel Astra (J).[103] Offered in LS, LT, LTZ, and Eco trim lines,[104] it is available with both the 1.8-liter and the turbocharged 1.4-liter engines, coupled with either a six-speed manual or automatic transmission.[104] With a starting price slightly higher than most compact competitors, the base-model Cruze LS is equipped with the 1.8-liter gasoline engine and comes with air-conditioning and power locks, while the higher-level LT and LTZ models are fitted with the 1.4-liter turbocharged gasoline engine.[26][105] For the Eco model, aerodynamic improvements have been made, such as an electronically controlled air shutter that adjusts air flow to the engine depending on the temperature, wind speed and tow weight.[106] To save weight, Chevrolet replaces the space-saving spare tire and jack on the Eco model with a tire inflator kit, reducing weight by 12 kilograms (26 lb).[107] Standard safety equipment includes electronic stability control and ten airbags, including side rear-seat and front knee airbags not fitted on models produced in the original South Korean facility.[108] The Cobalt's badge-engineered twin, the Pontiac G5, was not replaced by a Cruze-based equivalent, due to the Pontiac brand being phased out during 2010.[109] The Cruze is built on the production lines that were used to build the Cobalt and Pontiac G5 in Lordstown, Ohio.[110] Cobalt production ended in June 2010 and Cruze production started in July 2010. GM allocated three shifts to produce the Cruze, and it arrived at dealers in September 2010, giving all dealers time to deplete their inventories of Cobalts.[111] Changes to the North American-built Cruze for model year 2012 include the availability of the six-speed manual transmission for the 1.4-litre turbocharged engine; in addition, models not equipped with power front seats no longer have the front-seat cushion tilt option.[112] Starting with the 2014 model year, Chevrolet offered the Cruze with a clean diesel engine option for North America. With a starting price of $25,695, the Cruze diesel's 2.0-liter Multijet engine got 44 mpg on the highway and 27 mpg in the city, while producing 148 hp (110 kW) and 258 lb⋅ft (350 N⋅m), mated to a six-speed automatic transmission.[113] The 2014 Chevy Cruze Clean Turbo Diesel, direct from the factory, was rated for up to B20 (a blend of 20% biodiesel and 80% regular diesel) biodiesel compatibility.[114] The Cruze diesel was the first GM passenger car in the US equipped with a diesel engine in 28 years; however, sales were weaker than expected, with the diesel accounting for about 2% of US models.[115] For 2016, the first-generation Cruze continued as a fleet- and rental-exclusive model in the United States, billed as the Cruze Limited. The diesel model was discontinued, but a new chrome appearance package was offered.[116] Between 2011 and 2016, the first-generation Chevrolet Cruze was available in several different trim levels:
L: The L, introduced in 2015, was positioned below the LS, previously the base trim.
Offering equipment otherwise identical to the LS described below, the base L omitted the front and rear carpeted floor mats that were standard on the LS, and was only offered with a six-speed manual transmission.
LS: Between 2011 and 2014, the LS was the base Cruze trim level, until the base L was added in 2015. Standard features of the LS included fifteen-inch black-painted steel wheels with full wheel covers, air conditioning, power windows and door locks, keyless entry, premium cloth seating surfaces, dual manually adjustable front bucket seats, an AM/FM stereo with single-disc CD/MP3 player and auxiliary audio input jack with a six-speaker audio system, front and rear carpeted floor mats, a tilt-and-telescopic vinyl-wrapped steering wheel with integrated audio system controls, a 1.8L EcoTec inline four-cylinder (I4) gasoline engine, a six-speed manual transmission, and a split-folding rear bench seat. Options included Bluetooth for hands-free telephone calls (no streaming audio capabilities), a six-speed automatic transmission, and SiriusXM Satellite Radio.
1LT: Between 2011 and 2016, the 1LT was the value-oriented Cruze model. It added the following equipment to the base LS: sixteen-inch aluminum-alloy wheels, SiriusXM Satellite Radio, cruise control, and the OnStar in-vehicle telematics system. Options included the 1.4L turbocharged EcoTec inline four-cylinder (I4) gasoline engine, a six-speed automatic transmission, the Chevrolet MyLink seven-inch touch-screen infotainment system (2013+ models only), Bluetooth for hands-free telephone calls (no streaming audio capabilities; later standard equipment on 2013+ models), a 290-watt premium amplified Pioneer audio system, a power-adjustable front driver's bucket seat, a power sunroof, a leather-wrapped steering wheel, and the RS Package.
Eco: The Eco trim level, offered between 2011 and 2016 and based on the 1LT, was geared towards consumers who wanted a Cruze with higher fuel economy ratings. It added the following equipment to the 1LT: active front grille shutters, Bluetooth for hands-free telephone calls (no streaming audio capabilities; 2011–2013 models only), the Chevrolet MyLink touch-screen infotainment system (2013+ models only), a 1.4L turbocharged EcoTec inline four-cylinder (I4) gasoline engine, a power-adjustable front driver's bucket seat, a leather-wrapped steering wheel, and a rear-mounted spoiler. Additional options were identical to those of the 1LT, though the Eco also offered luxury leather-trimmed seating surfaces with dual heated front bucket seats as part of an optional package.
2LT: The 2LT trim level, offered between 2011 and 2016, added additional luxury and convenience features to the 1LT: a 1.4L turbocharged EcoTec inline four-cylinder (I4) gasoline engine, seventeen-inch aluminum-alloy wheels, the Chevrolet MyLink seven-inch color touch-screen infotainment system (2013+ models only), luxury leather-trimmed seating surfaces with dual heated front bucket seats, a leather-wrapped steering wheel, a power sunroof, and a security system. Options included a six-speed automatic transmission, a 290-watt premium amplified Pioneer audio system, side blind zone alert with rear cross-traffic alert (2014+ models only), the RS Package, and GPS navigation.
LTZ: The LTZ trim level, offered between 2011 and 2016, was the top-of-the-line Cruze trim level.
It added additional luxury features to the 2LT, such as premium aluminum-alloy wheels, remote start, a six-speed automatic transmission, and a 290-watt premium amplified Pioneer audio system. Options included GPS navigation and the RS Package.

Diesel: The Diesel was a diesel-powered version of the 2LT trim level, available for 2014 and 2015 only. It added unique seventeen-inch aluminum-alloy wheels, a six-speed automatic transmission, remote start, and a 2.0L turbocharged inline four-cylinder (I4) diesel engine to the 2LT trim level. Options were identical to those of the 2LT, though the RS Package was not available on the Diesel.

The car was launched and began production for South America in 2011. The new model was first announced for the Chinese market at the 2014 Beijing Auto Show and went on sale in August 2014. Based on the same platform as the Cruze J300, the Cruze J400 CN was essentially a reskinned old model, and was only sold for two years before the release of the international Cruze J400. The four-door sedan has a fastback-like sloping roofline and a low drag coefficient of 0.28, and comes with a choice of a 1.4 L (1,399 cc) turbocharged direct injection engine with a power of 110 kW (150 hp) at 5,600 rpm and torque of 235 N⋅m (173 lb⋅ft) at 1,600–4,000 rpm, which can be mated with a six-speed manual transmission or a seven-speed Start/Stop-enabled dual-clutch gearbox, or a 1.5 L (1,490 cc) direct injection engine with a power of 84 kW (113 hp) at 5,600 rpm and torque of 146 N⋅m (108 lb⋅ft) at 6,000 rpm, mated to a six-speed Start/Stop-capable automatic transmission. Both engines come from the new GM Small Gasoline Engine family. A weight reduction of 10% is achieved by using very-high-strength steels and aluminum alloys. Watt's link torsion beam rear suspension, first used on the Opel Astra (J), comes as standard. The car comes equipped with a 4.2" color screen radio or the MyLink 2.0 infotainment system with an 8" screen, and can be configured with OnStar Gen10 offering a 4G LTE Internet connection with a built-in Wi-Fi hotspot.[124] The second-generation Cruze began sales in North America in early 2016,[125] delayed a year by engineering changes.[126] The Cruze has a new external design with a split front grille and a fastback-like sloping roofline derived from the Chinese version. This second-generation Cruze has a slightly longer length and wheelbase than the one designed for the Chinese market, with different styling cues. It is powered by a 1.4-liter turbocharged four-cylinder engine producing 153 hp (114 kW) and 177 lb⋅ft (240 N⋅m) of torque (these unit pairings are reproduced in the short conversion sketch at the end of this article). The 2016 Cruze comes equipped with both Apple CarPlay and Android Auto capability, though only one phone platform can be used at a time.[127][128] In January 2016, Chevrolet unveiled the five-door hatchback version of the North American Cruze at the North American International Auto Show. It went on sale in late 2016 as a 2017 model.[129][130] Trim levels continue to be L, LS, LT (now combined into one trim level, as opposed to the previous 1LT and 2LT designations), and Premier (replacing the previous LTZ as the top-of-the-line trim level). Discontinued are the Eco and Diesel trim levels (at least for the time being).
All trim levels include a seven-inch MyLink touch-screen infotainment system with AM/FM radio, USB integration, a 3.5-millimeter auxiliary audio input jack, optional SiriusXM satellite radio, voice control, and CarPlay and Android Auto capabilities, plus keyless entry, power windows, power door locks, OnStar with 4G LTE Wi-Fi hotspot connectivity for multiple devices and RemoteLink via an app on the consumer's smartphone, air conditioning, and a 1.4-liter EcoTec inline-four engine. Higher trim levels (LT and Premier) also offer features such as the "RS Sport Package", alloy wheels, remote start, keyless access with push-button ignition, a premium sound system by Bose with external amplifier and subwoofer, an eight-inch MyLink infotainment system with GPS navigation, a power tilt-and-sliding sunroof, and power seats. However, only the top-of-the-line Premier trim level offers heated leather seating surfaces, premium alloy wheels, and other luxury features. The base L offers only a six-speed manual transmission, while the Premier, at the other end of the spectrum, offers only a six-speed automatic transmission. The LS and LT trim levels offer either a six-speed manual or a six-speed automatic transmission. A new diesel-powered Cruze became available in 2017.[131] It has the 1.6L turbodiesel also found in the 2018 Equinox, paired with either a nine-speed automatic or a six-speed manual transmission.[132] Before launching the Chevrolet Cruze as the Holden Astra in Australia, Holden engineers performed 100,000 kilometres of suspension and steering testing at the Lang Lang Proving Ground south-east of Melbourne, Australia, tuning for Australian roads. The result was a firmer yet more compliant ride and a more responsive steering tune. Other major changes over the international model included revised front and rear bumpers, which aim to give it a similar look to the Holden Astra hatch.[133] Unlike the hatch, the sedan is offered in LS, LS+, LT and LTZ trim levels. All models are powered by a 1.4 L (1,399 cc) turbocharged direct injection engine with a power of 110 kW (150 hp) at 5,600 rpm and torque of 235 N⋅m (173 lb⋅ft) at 1,600–4,000 rpm, mated to a six-speed manual or six-speed automatic transmission.[133] For 2019, the Cruze received a mid-cycle facelift, which made its debut in April 2018, along with restyled versions of the 2019 Camaro, Spark and Malibu. Changes for 2019 include the addition of a lower-priced LS model for the Cruze hatchback, the deletion of the six-speed manual transmission option (all Cruze models, including the previously manual-only L, now come equipped with an automatic transmission), an all-new third-generation MyLink system, and a revised RS Package for LT and Premier models. The 2019 Cruze went on sale in November 2018.[134] Production of the D2LC-K Cruze ended in South Korea in July 2018, and in the US and Mexico in March 2019.[135][136] The Lordstown assembly plant was closed and sold to Lordstown Motors, while Ramos Arizpe Assembly would build the Chevrolet Blazer instead. Production of the Cruze in Argentina and China continued. Assembly of the Chevrolet Cruze sedan at Lordstown Assembly in Lordstown, Ohio, concluded on Wednesday, March 6, 2019, when the last car rolled off the assembly line.[137] Workers at the plant wrote inspiring messages on the unfinished body underneath the paint, and signed one of the foam front seat cushions underneath the upholstery.
GM turned down an offer from Cleveland auto dealer Bernie Moreno to keep the plant open under a five-year deal and continue building the Cruze on two shifts under Moreno's direction, which he hoped to use to launch a ridesharing service featuring a fleet of Cruzes.[138] Production in China ended in February 2020, following stronger sales of the Chevrolet Monza in that market.[139] This left Argentina as the sole remaining producer of the Cruze, for the Latin American market. Unlike its Chevrolet predecessors for the U.S. market (Cavalier, Prizm, and Cobalt) or its Daewoo predecessors for the South Korean market (Espero, Nexia, Nubira, and Lacetti), the Cruze has been discontinued without any announcement of or plans for a replacement. The Chevrolet Cruze first entered the World Touring Car Championship in 2009 with a 2.0-litre naturally aspirated engine, taking six wins in its debut season.[140] The car proved successful from its entry, with Yvan Muller winning the championship in 2010 and again in 2011 using the new 1.6-litre turbocharged engine. Chevrolet placed first, second and third in 2011, with Muller finishing ahead of teammates Rob Huff and Alain Menu. Chevrolet finished 1–2–3 again in 2012, this time with Huff becoming champion ahead of Menu and Muller. The Cruze also entered the British Touring Car Championship for 2010 and 2011. Jason Plato won the championship for Chevrolet in 2010 and finished third in 2011.[141] The BTCC Cruze used the 2.0-litre naturally aspirated engine found in the original variant of the WTCC Cruze.[142] The Cruze won the Scandinavian Touring Car Championship in 2011, run by NIKA Racing under the banner of "Chevrolet Motorsport Sweden" with Rickard Rydell driving. Rydell and teammate Michel Nykjær finished second and third in 2012. Chevrolet pulled its sponsorship from the BTCC at the end of 2011 to support the Chevrolet team in the World Touring Car Championship for 2012.[citation needed] Chevrolet then announced it would not enter a works team for the 2013 WTCC season. For 2013, RML, the original builders of the Cruzes, continued to compete without the support of Chevrolet. Cars were also entered by Bamboo Engineering, NIKA Racing and Tuenti Racing Team. Despite no funding from the manufacturer, the Cruze remained the car to beat, even against works teams from Honda and Lada. Muller won his fourth WTCC title, his third in a Cruze, and James Nash won the Yokohama Drivers' Trophy for independent entries, ahead of fellow Cruze drivers Alex MacDowall and Michel Nykjær. RML confirmed they would build Cruzes to the new set of WTCC regulations for 2014, under which the cars gain power and more extensive aerodynamics. RML aimed to build up to six cars. Confirmed recipients included Tom Chilton, who had yet to announce a team to run his car; Bamboo Engineering, who would run two cars; and Campos Racing, who would enter a car for Hugo Valente. The Cruze returned to the BTCC in 2013 in the hands of Joe Girling and Tech-Speed Motorsport, who borrowed the car from Finesse Motorsport. The increase in performance of the Next Generation Touring Car entries meant the older Super 2000 specification cars like the Cruze were no longer competitive enough to challenge for wins, but they were provided with their own category. Now running a 2.0-litre turbocharged NGTC-specification engine, Girling took one class win at Donington Park but missed the second half of the season.
The car returned to Finesse Motorsport, who entered the Knockhill round of the championship with Aiden Moffat driving. Moffat became the BTCC's youngest-ever driver at 16 years, 10 months and 28 days. This was to be the S2000 Cruze's final appearance in the BTCC, as S2000 cars were to be abolished from 2014. Andy Neate entered the 2013 season with a new NGTC-specification Cruze, built by his own team, IP Tech Race Engineering, and using an engine built by RML. The car made its debut at Snetterton and competed at several rounds towards the end of the season. The car has since been sold to Aiden Moffat, who will run the car with his own team for 2014. BTC Racing will enter a hatchback variant of the Cruze for 2014, driven by Chris Stockton. The car was originally intended to be used by Jason Plato in 2012, but RML and Chevrolet withdrew from the BTCC and mothballed the shell. BTC Racing acquired it and was initially included on the entry list for 2013, but the car was not finished in time and never appeared all season. The first-generation Chevrolet Cruze debuted in the Argentine TC 2000 in 2011, and the second generation in 2016. Agustín Canapino won the 2016 championship. The Cruze diesel was the first GM passenger car equipped with a diesel engine in the American market in 28 years; however, sales were weaker than expected, at about 2% of Cruze models sold from July 2013 through June 2014.[115] In August 2014, Cruze sales reached the milestone of 3 million units sold worldwide, 16 months after passing the 2 million mark. By April 2016, cumulative sales of cars bearing the Cruze name exceeded 4 million worldwide.[166]
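As an aside on the figures quoted throughout this article: every specification is given in paired metric and imperial units, and the two markets appear to mix mechanical horsepower (the US figures) with metric horsepower, or PS (the Chinese-market figures). The short Python sketch below uses standard conversion constants to reproduce the pairings quoted above; it is purely illustrative and assumes nothing beyond those published figures.

    # Illustrative unit conversions for the specification pairs quoted above.
    # Constants are standard definitions; the text appears to use mechanical
    # horsepower for US figures and metric horsepower (PS) for Chinese figures.
    KW_PER_MECH_HP = 0.7457    # 1 mechanical hp = 745.7 W
    KW_PER_METRIC_HP = 0.7355  # 1 metric hp (PS) = 735.5 W
    NM_PER_LBFT = 1.3558       # 1 lb-ft = 1.3558 N-m
    KG_PER_LB = 0.4536         # 1 lb = 0.4536 kg
    L_PER_US_GAL = 3.7854
    KM_PER_MILE = 1.6093

    def mpg_to_l_per_100km(mpg: float) -> float:
        """Convert US miles per gallon to litres per 100 km."""
        return 100.0 * L_PER_US_GAL / (mpg * KM_PER_MILE)

    print(114 / KW_PER_MECH_HP)    # ~152.9 -> the "153 hp (114 kW)" second-generation engine
    print(110 / KW_PER_METRIC_HP)  # ~149.6 -> the "110 kW (150 hp)" Chinese-market engine
    print(240 / NM_PER_LBFT)       # ~177.0 -> "177 lb-ft (240 N-m)"
    print(350 / NM_PER_LBFT)       # ~258.1 -> the diesel's "258 lb-ft (350 N-m)"
    print(12 / KG_PER_LB)          # ~26.5  -> the Eco's "12 kilograms (26 lb)" weight saving
    print(mpg_to_l_per_100km(44))  # ~5.3 L/100 km for the diesel's 44 mpg highway rating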
What's Up Fox?

What's Up Fox? (Korean: 여우야 뭐하니; RR: Yeo-u-ya Mwohani, also known as Foxy Lady) is a 2006 South Korean television series, starring Go Hyun-jung and Chun Jung-myung.[1] It aired on MBC from September 20 to November 9, 2006, on Wednesdays and Thursdays at 21:55, for 16 episodes. The romantic comedy explores age differences in relationships, in particular between a thirty-something woman and her best friend's brother, who is nine years younger.[2][3] Go Byung-hee (Go Hyun-jung) is a 33-year-old woman working as a reporter for a third-rate magazine, but with the heart of an innocent 24-year-old. She frequently finds it hard to cope with the unexciting aspects of her life. She dreams of someday working for a company that she can be proud of, and of finding the man of her fantasies: someone her age who has a good educational background and is financially stable, who will provide warm support when she needs it, and who will go with her on a world tour in a campervan. One day, she gets into an accident with her friend Seung-hye's younger brother, Chul-soo (Chun Jung-myung), and begins to see him in a new light. Park Chul-soo is a 24-year-old high school graduate working as a mechanic at a car repair shop. Although Chul-soo does not seem to have much, he is mature and enjoys life by doing the things he loves: working as a mechanic and traveling. How many people actually find the love they dream of? Until they realize that true love is just around the corner, this couple continues to pursue the unusual romance of "dating a friend's brother" and "dating my sister's friend." It aired in Japan on cable channel KNTV from September 20 to November 9, 2006,[4] and on cable channel WOWOW beginning April 19, 2007.[5] In the Philippines, a Filipino-dubbed version aired on GMA 7 under the title Foxy Lady from July 9 to September 21, 2007, with the Eraserheads song "With a Smile" as the local theme for its soundtrack.

Waste management in South Korea

Waste management in South Korea involves reducing waste generation and ensuring maximum recycling of the waste that is produced. This includes the appropriate treatment, transport, and disposal of collected waste. South Korea's Waste Management Law was established in 1986, replacing the Environmental Protection Law (1963) and the Filth and Cleaning Law (1973).[1] The new law aimed to reduce general waste under the waste hierarchy (the three 'R's). The Waste Management Law imposed a volume-based waste fee system, effective for waste produced by both household and industrial activities (municipal solid waste). It began the regulation of systematic waste streams through basic principles in waste management practices, from reduction to disposal of waste. The law also encouraged recycling and resource conservation through a deposit-refund system and a landfill post-closure management system.[2] The Seoul Metropolitan Government (SMG) adapted the national policy on waste management to meet demands for an improved waste disposal system in the 1990s. In order to satisfy the public, Seoul concentrated its waste management policy on waste reduction and utilisation.[3] Originally, solid waste was not an environmental concern in South Korea. Little concern was given to the environmental hazards of the solid waste being generated and dumped in landfills, and the government charged only nominal fees for household waste disposal despite the large amounts being generated.[4] This became significant during the Korean economic boom, which brought an increase in the production of municipal solid waste. Between 1970 and 1990, the amount of municipal solid waste generated grew from 12,000 tons to 84,000 tons per day. This led to the rise of waste disposal issues in South Korea.[5] The low recycling rate and increased solid waste generation contributed greatly to environmental pollution. As landfills were heavily relied on, the ground and water were polluted.[6] Air quality was also affected, as landfills contributed to hazardous gas emissions and unpredicted fires.[6] The Nakdong river is one of the major streams in South Korea and a main drinking source in the Gyeongsang province.[7] Over the past decades, population growth and industrialisation along the Nakdong river have caused pollution of the stream. Industrial waste and sewage, along with urban and agricultural drainage, led to the deterioration of the river.[8] On March 1, 2008, a chemical factory explosion caused a phenol leak into the Nakdong river. The incident caused toxic substances to leak, leading to major health concerns for the public. Tests found that formaldehyde had also leaked into the river, but concluded that the harmful substances were diluted as the amount of water discharged was increased.[9] This was the second time the river had been contaminated by phenol. In 1991, a phenol leak resulted from the bursting of an underground pipe, leaking pure phenol into the river. This disastrous leak rendered the water undrinkable.
South Korea was previously careless in dumping waste into the water and air, and The Korea Times also discovered the illegal dumping of non-toxic waste along the Nakdong river by 343 factories.[10] Water quality quickly became a priority, and it has slowly improved with the installation of water treatment plants.[11] Arisu is the tap water brand of the Seoul Metropolitan Government, positioned as a safe tap water supply for the citizens of Seoul.[12] Arisu sources its water from the Han River, and the water goes through several tests to ensure drinkable quality as recommended by the World Health Organisation (WHO).[13] Substances tested for include chlorine, iron, and copper. Arisu also manages water flow rates systematically and controls water quality in purification centres.[14] In addition, the Seoul Metropolitan Government operates multiple water treatment plants and sewage treatment centres to ensure the improvement of water quality.[13] The volume-based waste fee (VBWF) system was implemented in 1995 by the Korean government in an attempt to reduce waste generation and encourage recycling among citizens. Municipal waste is collected in synthetic bags, and recyclables are separated and sorted in recycling bins. All waste, with the exception of recyclables, bulky items, and coal briquettes, is disposed of according to the VBWF system: waste is measured by the volume of standardized bags, and citizens are charged accordingly (a sketch of this pay-as-you-throw pricing appears below).[15] A decade after the introduction of the VBWF system, waste generation rates were reduced and recycling rates improved dramatically. The public's awareness of the environment increased, and technologies for recycling improved. Decomposable bags were introduced, and excessive packaging of products was reduced. Refillable products are now preferred, to reduce the generation of waste.[6] The VBWF system increased Korean citizens' willingness to recycle, reducing the burden on incineration and landfills.[16] Jongnyangje (Hangul: 종량제) is the organised waste management system for the effective collection and reuse of waste and resources in South Korea.[17] All waste must be separated into general waste, food waste, recyclable items, or bulky items.[18] Bulky items consist of waste too big to fit into the issued disposal bags, such as furniture, electrical appliances, and office items; these require special stickers obtainable from district offices. Recycling is mandatory in South Korea, and recyclable items are divided according to material type, from paper to plastics.[19]
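The pay-as-you-throw logic referenced above is simple to state: households buy standardized bags whose price scales with volume, so the disposal fee is just the sum of the bag prices. A minimal Python sketch follows; the bag sizes and prices are hypothetical placeholders, since actual prices are set per district and have changed over time.

    # Minimal sketch of volume-based waste fee (VBWF) pricing.
    # Bag sizes (litres) and prices (KRW) are hypothetical placeholders;
    # real prices are set by each district and vary over time.
    HYPOTHETICAL_BAG_PRICES_KRW = {5: 150, 10: 250, 20: 490, 50: 1250}

    def disposal_cost(bags_used: dict) -> int:
        """Total fee in KRW for a set of bags, keyed by bag size in litres."""
        return sum(HYPOTHETICAL_BAG_PRICES_KRW[size] * count
                   for size, count in bags_used.items())

    # A household filling four 20-litre and two 10-litre bags in a month:
    print(disposal_cost({20: 4, 10: 2}))  # 2460 KRW under these placeholder prices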
Food waste is collected separately from general waste in special bags known as eumsigmul sseulegi bongtu (Hangul: 음식물 쓰레기 봉투), whose prices vary by size and district.[18] A monthly fee proportional to the amount of food wasted is charged to each household, enabled through a Radio Frequency Identification (RFID) card.[20] Apart from charging fees for food wastage, South Korea also reduces food waste by reprocessing collected food into livestock feed.[21] Since the South Korean government banned the dumping of food waste in landfills in 2005 and implemented food waste recycling in 2013, the amount of food waste being recycled has increased dramatically.[21] Citizens are encouraged to include only what animals can eat in these food disposal bags; bones, fruit pits, and seeds therefore cannot be counted as food waste.[18] The collected waste is then dried out and repurposed into feed appropriate for animal consumption.[22] Some food waste is instead turned into fertiliser or compost after it has been processed and all moisture removed.[23] This fermented food waste fertiliser is an eco-friendly and organic option for cultivating crops. South Korea ranks second worldwide among the largest waste producers, with South Koreans using an average of 420 plastic bags annually.[24] To counter this, South Korea banned single-use plastic bags in supermarkets. Alternatives such as paper bags, reusable cloth shopping bags or recyclable containers are offered instead, and proceeds are put towards waste disposal. The law was introduced with the intention of putting an end to non-biodegradable garbage, as well as to manage and preserve natural resources and recyclable waste.[25][26] The move was also the result of a revised law on the conservation of resources and the reuse of recyclable waste, implemented following a plastic waste handling crisis after China banned the importation of plastic garbage. That crisis caused South Korea's recycling firms to stop collecting garbage because of the financial loss incurred from the decrease in plastic prices, leaving plastic waste on the streets for weeks. The South Korean government was forced to come up with more sustainable ways to manage plastic waste instead of shipping it overseas. E-waste (electronic waste) comprises discarded electrical and electronic devices. Managing e-waste, or waste electrical and electronic equipment (WEEE), is a major concern due to the magnitude of the waste stream involved, as well as the toxic chemicals in the devices, including barium, cadmium, chromium, lead, mercury, nickel, and brominated flame retardants. Discarded devices such as old computers, smartphones, and electrical appliances may leak toxic chemicals if left in landfills.[18] Items such as batteries and cell phones require additional care in disposal. To prevent leakage, the Seoul city government has partnered with SR Center to collect e-waste. Seoul discards 10 tons of e-waste annually, with only a fifth of it ending up in the special recycling centre, where devices are taken apart and valuable metals such as gold, copper, or rare resources can be extracted.[27] Many parts of the world are researching feasible and environmentally friendly ways to dispose of e-waste under a WEEE management system. Recycling processes have been established in several countries, but a WEEE waste management system has not been introduced in most countries.
In response to the growing concern over electronic waste, the Act on Resource Recycling of Waste Electrical and Electronic Equipment (WEEE) and End-of-Life Vehicles was introduced in 2007. This act is aimed at reducing the amount of e-waste ending up in landfills and incinerators, and at improving the performance and lifespan of electronic devices.[28] The Waste Management Law was first introduced in 1986. It provided a framework in which waste management was not only about containment but about reducing waste as well. Since its introduction, waste management in South Korea has become more systematic and integrated, and the government has funded projects to promote this approach.[1] The law covered all waste streams, from municipal solid waste to manure, construction and demolition waste, and infectious waste. In 1991, the Act on Treatment of Livestock Manure, Wastewater and Sewage was enacted to manage manure waste separately. In 1992, the Act on Resource Saving and Recycling Promotion was enacted to treat waste as a resource. Based on this act, the volume-based waste fee system was implemented on a pay-as-you-throw basis, and legal support was provided for residents near waste disposal sites to address NIMBY (not in my back yard) concerns.[2] South Korea is working its way towards becoming a zero-waste society, aiming for a 3% landfill rate and an 87% recycling rate by 2020; this target has since been extended to 2025 due to conflicts and setbacks between stakeholders.[1] The South Korean Ministry of Environment (MOE) promoted a waste-to-energy policy to boost South Korea's energy self-sufficiency rate. The policy aims to reduce the cost of waste disposal through incineration and landfills.[29] To generate electricity, fuel, and heating, waste gas, wood scraps, household waste, and other wastes are converted to energy. Energy production from waste is 10% cheaper than solar power and 66% cheaper than wind power, making it a comparatively cheap option among these sources (the sketch below makes the implied cost ratios explicit). In 2012, new and renewable energy accounted for only 3.18% of production, but the South Korean government hopes to increase the share to 20% by 2050.[30] China was for decades the dumping ground for the world's plastic. In the 1990s, China saw discarded plastic as profitable, reprocessing it into smaller, exportable bits and pieces, and it was cheaper for countries to export their plastic to China than to discard it themselves. In November 2017, China stopped accepting contaminated plastic, and the rejected plastic was absorbed by neighbouring countries such as Thailand, Vietnam, the Philippines, and South Korea.[31] Now, Southeast Asian countries are starting to reject this waste as well. In August 2018, Vietnam introduced strict restrictions on plastic scrap imports. Thailand followed suit, announcing a ban on imports of electronic parts. In October 2018, Malaysia also announced a ban on imports of plastic scrap.[32] In early January 2019, the Philippines rejected 1,200 tons of South Korean waste deemed non-recyclable; it was shipped back to South Korea in 51 trash-filled containers. In addition, 5,100 tons of South Korean waste was found to have been illegally imported into the Philippines, including batteries, bulbs, used dextrose tubes, electronic equipment and nappies. South Korea and the Philippines are in talks about how this waste should be repatriated.
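One way to read the waste-to-energy cost comparison above ("10% cheaper than solar power, and 66% cheaper than wind power") is as per-unit production costs, which would imply that solar costs about 1.1 times, and wind nearly 3 times, as much as waste-to-energy. The short sketch below makes that implied ratio explicit; it is only one interpretation of the quoted figures, not a claim about actual energy prices.

    # Reading "10% cheaper than solar" and "66% cheaper than wind" as
    # per-unit production costs: waste = 0.90 * solar and waste = 0.34 * wind.
    # Normalising waste-to-energy to 1.0 makes the implied ratios explicit.
    waste = 1.0
    solar = waste / 0.90  # ~1.11x the cost of waste-to-energy
    wind = waste / 0.34   # ~2.94x the cost of waste-to-energy
    print(round(solar, 2), round(wind, 2))  # 1.11 2.94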
Rogers Communications

Rogers Communications Inc. is a Canadian communications and media company. It operates primarily in the fields of wireless communications, cable television, telephony and Internet connectivity, with significant additional telecommunications and mass media assets. Rogers has its headquarters in Toronto, Ontario.[4] The company traces its origins to 1960, when Ted Rogers and a partner bought the CHFI-FM radio station;[5] they then became part-owners of a group that established the CFTO television station.[6] The chief competitor to Rogers is Bell Canada, which has a similarly extensive portfolio of radio and television media assets, as well as wireless, television distribution, and telephone services, particularly in Eastern and Central Canada. The two companies are often seen as having a duopoly on communications services in their regions, and both own a stake in Maple Leaf Sports and Entertainment. However, Rogers also competes nationally with Telus for wireless services, and, primarily indirectly, with Shaw Communications for television service. In 1925, Edward S. Rogers Sr. invented the world's first alternating current (AC) heater filament cathode for a radio tube, which enabled radios to be powered by ordinary transformer-coupled household electric current.[5] This was a breakthrough in the technology and became a key factor in popularizing radio reception. He also established the CFRB radio station in Toronto (later acquired by outside interests). In 1931, he was awarded an experimental television licence in Canada. On May 6, 1939, while working on radar, he died suddenly of complications from a hemorrhage, at the age of 38. He left a widow, Velma, and a five-year-old son, Edward (known as Ted). While his business interests were subsequently sold, his son later determined to carry on his father's legacy.[5] In 1960, Ted Rogers and broadcaster Joel Aldred[7] raised money to found Aldred-Rogers Broadcasting in order to purchase CHFI, an FM radio station in Toronto.[8] Aldred-Rogers Broadcasting also became a part-owner of Baton Aldred Rogers Broadcasting (BARB), which established CFTO-TV, Toronto's first private television station.[9][10] In 1964, Rogers established CFTR, an AM radio station. In 1967, Rogers established Rogers Cable TV in partnership with BARB. In 1971, new CRTC regulations forced BARB to sell its 50% stake in Rogers Cable TV. In 1979, Rogers acquired Canadian Cablesystems, and became listed on the Toronto Stock Exchange as a result. In 1980, Rogers acquired Premier Cablevision and became the largest cable company in Canada. In 1986, Rogers Cable was renamed Rogers Communications; it established operational control over Cantel, a wireless telephone company in which Rogers had a stake. Rogers Communications Inc. unveiled a new logo on January 17, 2000, marking the departure of its original logo.[11] In 2000, Rogers acquired Cable Atlantic[12] from Newfoundland businessman Danny Williams. In July 2001, Rogers Media acquired CTV Sportsnet, which was renamed Rogers Sportsnet that November. The FAN 590 sports radio station joined Rogers Media in August 2001, along with 14 Northern Ontario radio stations.[13] In fall 2004, several strategic transactions were executed that significantly increased Rogers' exposure to the potential of the Canadian wireless market. Rogers acquired the 34% of Rogers Wireless owned by AT&T Wireless Services Inc.
for $1.77 billion.[14] On December 2, 2008, Ted Rogers died of heart failure.[15] In 2012, Rogers Cable filed a complaint in an Ontario court against penalties levied under a "Truth in Advertising" law, claiming that the amount of the penalties, and the requirements imposed by the law, violated the Charter of Rights and Freedoms.[16] The company also had to contend with the rising trend of customers cancelling or forgoing cable television subscriptions in favour of cheaper content delivery means, such as streaming services like Netflix, a demographic called "cord cutters" and "cord nevers." In response, Rogers acquired content at a speculated cost of $100 million to begin its own competing online streaming service, Shomi, much like the American Hulu Plus,[17] which launched November 4, 2014. Shomi shut down after only two years of operation, on November 30, 2016.[18] In the summer of 2014, Rogers reported a 24% drop in profit compared to the previous year's second quarter.[19] Rogers Communications is traded on the Toronto Stock Exchange and the New York Stock Exchange under the ticker "RCI". Following the death of Ted Rogers in 2008, control of Rogers Communications passed to the Rogers Control Trust, a trust for which a subsidiary of Scotiabank serves as trustee. Ted's son Edward Rogers III and daughter Melinda Rogers serve, respectively, as Chair and Vice-Chair of the trust.[20][21] The company's assets and divisions span publishing, broadcasting, sports, and consumer services. Prior to 2019, Rogers Publishing Limited published more than 70 consumer magazines, trade and professional publications, digital properties and directories in Canada, including Maclean's, Canada's weekly newsmagazine; its French-language equivalent, L'actualité; Sportsnet Magazine; Chatelaine; Flare; and a variety of other magazines and their companion web sites. The publishing arm was once part of the Maclean-Hunter publishing empire.[citation needed] Unlike Maclean-Hunter, Rogers does not have printing facilities and has contracted out those services (in 2008 to Montreal-based TC Transcontinental, which prints the magazines at its plants across Canada).[23] On June 28, 2007, Rogers offered to sell the two religious-licensed OMNI stations in Winnipeg and Vancouver as part of the Citytv deal, although the company stated that it intended to retain the multilingual-licensed OMNI stations.[24] On July 7, Rogers also announced a takeover offer for Vancouver's multicultural station CHNM. In September 2007, Rogers also applied to the CRTC to acquire 20 per cent of CablePulse 24, a local news channel in Toronto which was previously paired with Citytv (both stations were previously owned by CHUM Limited) but was retained by CTVglobemedia in the June 8 licence approval.[25] Neither CTVglobemedia nor Rogers has, to date, announced whether this application will change future plans for the station. In 2012, Rogers purchased CJNT-DT Montreal,[26] and on February 3, 2013, it was rebranded as City Montreal. In March 2019, Rogers sold its magazine brands, including Maclean's, Chatelaine and HELLO! Canada, to St.
Joseph Communications for an undisclosed sum.[27] In addition to its ownership of Sportsnet (acquired from CTV), Sportsnet One and Sportsnet World, Rogers Media operates the Toronto Blue Jays baseball team through Rogers Blue Jays Baseball Partnership, as well as the Rogers Centre (previously known as SkyDome). Through Sportsnet, Rogers Media also holds a 50% ownership in Dome Productions, a mobile production and distribution joint venture that is a leader in high-definition television production and broadcasting in Canada. Rogers also owns the naming rights to Rogers Arena, home of the Vancouver Canucks,[28] as well as Rogers Place, the home of the Edmonton Oilers.[29] On August 25, 2012, Rogers Media agreed to acquire Score Media, which includes The Score Television Network, for $167 million, including a 10% stake in its digital business. The deal was completed on October 19, 2012.[30][31] A joint venture between Rogers Communications and Bell Canada owns 75% of Maple Leaf Sports & Entertainment, which owns the Toronto Maple Leafs of the National Hockey League, the Toronto Raptors of the National Basketball Association, the Toronto Argonauts of the Canadian Football League, and Toronto FC of Major League Soccer, as well as their minor league farm teams: the Toronto Marlies of the American Hockey League (AHL), Raptors 905 of the NBA G League and Toronto FC II of USL League One, respectively. On November 26, 2013, Rogers Communications unveiled the details of a 12-year, C$5.2 billion partnership with the National Hockey League, which began in the 2014–15 season. The deal gives Rogers control of the NHL's national broadcast and digital rights and ultimately the ability to stream all NHL feeds on all of its current platforms, replacing Bell Media and CBC Sports as the national cable and broadcast television rightsholders respectively. The deal shifted the balance of power in the country's broadcast industry, as it drives demand for Rogers cable TV subscriptions. The transaction marked the first time a major North America-wide sports league had allowed all of its national rights to go to one company on a long-term basis.[32][33] As part of the deal, Rogers also took over Canadian distribution of the NHL Centre Ice and GameCentre Live services. National English-language coverage of the NHL is carried primarily by Rogers' Sportsnet group of specialty channels; Sportsnet holds an exclusive window for games played on Wednesday nights. Hockey Night in Canada was maintained and expanded under the deal, airing up to seven games nationally on Saturday nights throughout the regular season across CBC Television, the Sportsnet networks, the Rogers-owned television network Citytv, and FX Canada. While CBC maintains Rogers-produced NHL coverage during the regular season and playoffs through a time-brokerage agreement with the company, Rogers assumes editorial control and the ownership of any advertising revenue from the telecasts.[34] Citytv (and later Sportsnet) also airs a Sunday night game of the week, Rogers Hometown Hockey, which features a pre-game show originating from various Canadian communities. Sportsnet's networks also air occasional games involving all-U.S. matchups.[35][36][37][38][39][40] Under a sub-licensing agreement with Rogers, Quebecor Media holds national French-language rights to the NHL, with all coverage airing on its specialty channel TVA Sports.
TVA Sports' flagship broadcasts on Saturday nights focus primarily on the Montreal Canadiens.[41][42] Rogers sought to increase the prominence of NHL content on digital platforms by re-launching the NHL's digital out-of-market sports package GameCentre Live as Rogers NHL GameCentre Live, adding the ability to stream all of Rogers' national NHL telecasts, along with in-market streaming of regional games for teams whose regional rights are held by Sportsnet.[43] GamePlus, an additional mode featuring alternate camera angles intended for a second-screen experience, such as angles focusing on certain players, net and referee cameras, and a Skycam in selected venues, was also added exclusively for GameCentre Live subscribers who also subscribe to Rogers' cable, internet, or wireless services.[44][45] In the lead-up to the 2014–15 season, Rogers began to promote its networks as the new home of the NHL through a multi-platform advertising campaign; the campaign featured advertising and cross-promotions across Rogers' properties, such as The Shopping Channel, which began to feature presentations of NHL merchandise, and its parenting magazine Today's Parent, which began to feature hockey-themed stories in its issues.[46] On May 28, 2014, Rogers announced a six-year sponsorship deal with Scotiabank, under which the bank became the title sponsor for Wednesday Night Hockey and Hockey Day in Canada, and a sponsor for other segments and initiatives throughout Rogers' NHL coverage.[47] On October 6, 2014, Rogers and the NHL began their media sales venture, in which Rogers leads all Canadian national NHL media sales across its owned and operated broadcast and digital platforms, as well as ad sales for league-owned digital assets in Canada.[48] In 2011, a partnership was formed between Rogers Communications and Yodle, Inc. to provide a suite of digital marketing services to Canadian small, medium, and enterprise-size businesses.[49][50][51][52][53] These solutions have been deployed under the name OutRank by Rogers and operate as a business unit within the company. Services include search engine optimization, mobile marketing, social media marketing, pay per click, and analytics.[54][55][56][57] The launch was announced in January 2012 with the unit's first client, Ontario-based CLS Roofing.[58] OutRank by Rogers is a Google Premier SMB Partner and promotes responsive web design.[59][60] The company is a donor to the Ronald McDonald House of Toronto.[61] In 2008, Rogers Communications launched Zoocasa, an online real estate listing service. The company later became a licensed real estate brokerage, and in May 2013 the website relaunched to allow homebuyers to find properties and agents.[62] The service also provided rebates on real estate commissions to buyers and sellers. Zoocasa was shut down on June 22, 2015; the website's domain and technology were purchased for $350,000, and the site relaunched on July 2, 2015, under new ownership.[63] Texture (previously known as Next Issue) was a digital magazine app introduced to the Canadian market by Rogers in 2013.[64] The service had a monthly subscription fee that gave readers access to over 200 magazines in English and French.[65] Texture was purchased by Apple in 2018, and the service was discontinued in 2019 in favor of Apple News+. Rogers Bank (French: Banque Rogers) is a Canadian financial services company wholly owned by Rogers Communications.
Rogers applied to the Minister of Finance under the Bank Act for permission to establish a Schedule I bank (a domestic bank that may accept deposits) in the summer of 2011.[66] At launch, Rogers Bank offered a Rogers-branded credit card targeted at existing customers.[67] A companion card branded for Rogers subsidiary Fido was introduced in 2016.[68] The bank offers three categories of credit card to Canadians: the Fido Mastercard,[69] the Rogers Platinum Mastercard,[70] and the Rogers World Elite Mastercard.[71]

List of 2019 albums

The following is a list of albums released in 2019. Albums listed should be notable, defined as receiving significant coverage from reliable sources that are independent of the subject. For additional information about bands formed, reformed, disbanded, or on hiatus, for deaths of musicians, and for links to musical awards, see 2019 in music.

The Apprentice (British series 10)

The tenth series of The Apprentice (UK), a British reality television series, was broadcast in the UK during 2014, from 14 October to 21 December on BBC One;[1] due to live coverage that summer of both the FIFA World Cup and the Commonwealth Games in Glasgow, the BBC postponed the series' broadcast until autumn to avoid clashing with these events.[2] It was the last series to feature Nick Hewer as Alan Sugar's aide; he left the programme following the series finale. The tenth series also featured a guest appearance from Ricky Martin, winner of the eighth series, as an interviewer for the Interviews stage of this series only. Production on the tenth series included two prominent tasks traditionally used in the show's format being specially designed to celebrate The Apprentice's tenth year of broadcast, while other tasks featured a more varied arrangement of challenges, including some geared towards the technology industry. Alongside the standard twelve episodes, with the first two aired within a day of each other, the series featured two specials before its premiere – "Meet the Candidates", made available online only on 7 October, and "Ten Years of The Apprentice" on 13 October – and two specials aired alongside the series – "The Final Five" on 16 December and "Why I Fired Them" on 18 December. Marking the programme's tenth series, production staff selected twenty candidates to take part,[3] the highest number to be involved in any variation of The Apprentice globally, with Mark Wright becoming the overall winner. Excluding specials, the series averaged around 7.40 million viewers during its broadcast. Applications for the tenth series began in spring 2013, towards the end of the ninth series' broadcast, with the selection process of auditions, assessments and interviews held by mid-summer of that year. As The Apprentice was entering its tenth year, production staff and Alan Sugar discussed how to celebrate this milestone before filming began, settling on a few key decisions. One such decision concerned the design of the tasks: apart from creating more variety than in the previous series – some tasks focused on technology, and another divided the teams to work both within the UK and abroad – two traditional tasks, the first sales task and the bargain-hunting task, were designed around celebrating the programme's milestone, featuring items that had been sold within these tasks in past series. The more significant decision, however, concerned the number of candidates that would take part in the series.
While the production staff selected sixteen candidates, as had been done since the third series, they were kept unaware that a further four applicants had also been selected to take part until filming began. The decision to increase the number of candidates meant that Sugar was required to perform more multiple firings than before, allowing the series to include a triple firing outside of the Interviews stage, the first time in the programme's history that this had occurred, though fan reaction to this decision was mixed during the series' broadcast. During filming, Nick Hewer began to contemplate his future on the programme, finding the strain on his stamina increasingly difficult to cope with given the amount of work he had to do on and off camera. Alongside other commitments, including his new role as host of Channel 4's Countdown, he eventually decided that the tenth series would be his last on The Apprentice, revealing his decision towards the end of the tenth series' broadcast, with it fully confirmed by Sugar on social media and on the You're Fired half of the series finale.[4][5] Apart from Hewer, Margaret Mountford decided that, after working as an interviewer for the past four series, she would not be returning, leading Sugar to invite Ricky Martin to interview candidates who made it to the Interviews stage. Before filming was completed and editing finalised, the BBC found that it could not place the tenth series in its spring 2014 schedule because of live coverage of two major sporting events that year – the FIFA World Cup and the Commonwealth Games in Glasgow. As a result, it was forced to air episodes in autumn, where there would be less competition for viewing figures, with Sugar confirming this decision during October 2013.[2] To accommodate the final edit of the tenth series, the premiere was preceded by a special entitled "Ten Years of The Apprentice", focused on highlights from the past nine series, mainly scenes that were memorable for Sugar, Hewer, and Karren Brady. In addition, this series saw the introduction of an online-exclusive mini-episode entitled "Meet the Candidates" – using tapes from the selection process, the production staff invited comedian Matt Edmondson, a fan of The Apprentice who had been involved in online spin-offs for the programme, to star in a spoof online episode in which he "interviewed" the candidates who had secured a place on the tenth series, usually deriving comedy from his responses to the genuine answers each candidate had given. When filming began, the first task saw the men name their team Summit, while the women settled on the name Tenacity only after this task – their initial choice of Decadence was kept throughout filming of the first task and was not edited out by production staff; reviewers of the first episode remarked that Decadence was a "terrible" choice for a team name, despite the reasons given for its selection.
Of those who took part, Mark Wright became the eventual winner, using his prize to start up an SEO business called Climb Online,[6] which went on to establish an income of approximately £5 million a year.[7] After facing a multitude of business tasks and a tough interview, the two finalists, aided by old friends, face the task of presenting their business proposal to an audience of business and industry experts, detailing its key areas – its name, its goals, its target market, and its business structure. Bianca presents her plans for a tights and hosiery business, but faces questions over her lack of manufacturing experience, product pricing and branding, despite her presentation being well received and her concept deemed good. Mark presents his plans for an SEO business supporting small companies, receiving praise for his strong industry expertise and brand name, yet faces questions over his proposal's target market and staff costs. Based on feedback from these presentations, Lord Sugar deems that Mark Wright will be his business partner for 2014, for a strong proposal backed by the expertise to deliver it and for being the stronger of the two finalists, leaving Bianca Miller to finish as runner-up due to the many concerns raised about her proposal. While the decision to have 20 candidates made this series the largest in any incarnation of the show worldwide, it drew criticism from viewers, who remarked that there was less room to get to know the candidates, some of whom remained in the competition for some time yet had almost no screen time at all. The decision also drew complaints that it led away from the business aspect of the show, making it more of a "reality" programme by inducing "shock firings".[citation needed] Following the broadcast of the seventh episode, it was revealed that an energy drink in the United States already had the brand name "Big Dawg", raising questions over why Summit had been allowed to use the name during the episode's task. The production staff later acknowledged that they had been aware of this fact but had seen no issue with its use in this series of The Apprentice, for two reasons: firstly, there was no British trademark of that name in use at the time of filming; and secondly, the candidates could not have known of the existence of the brand, due to the show's rules prohibiting them from accessing the internet while taking part.[21] Following the broadcast of the ninth episode, many viewers raised complaints about Lord Sugar's decision to revoke Tenacity's purchase of the paper-made skeleton, with the scene receiving similar reactions from the participants of Channel 4's Gogglebox, who were just as negative about Sugar's decision. Many of these complaints stated that the purchase had technically fitted the task's brief, because no proper specification had been made about what kind of skeleton the teams had to purchase, deeming the decision unfair and biased against Felipe Alviar-Baquero, the candidate who made the purchase.[22][23] Official episode viewing figures are from BARB.[24] Note: during the two-hour final, the show was shared with The Apprentice: You're Hired, and as a result the figures are lower than expected. The first hour was the main show, whereas the second hour was You're Hired.
Original overnight figures for the final put the first hour at one million viewers more than the two-hour average.[25]

North Korean defectors

Since the division of Korea after the end of World War II and the end of the Korean War (1950–1953), North Koreans have defected for political, ideological, religious, economic or personal reasons. Such North Koreans are referred to as North Korean defectors. Alternative terms in South Korea include "northern refugees" (Korean: 탈북자, talbukja) and "new settlers" (새터민, saeteomin). During the North Korean famine of the 1990s, there was an increase in defections, reaching a peak in 1998 and 1999. Among the main reasons for the falling number of defectors, especially since 2000, are strict border patrols and inspections, forced deportations, and the rising cost of defection. The most common strategy is to cross the border into Jilin and Liaoning provinces in northeast China before fleeing to a third country, since China, a relatively close ally of North Korea, does not offer asylum. China, the most influential of North Korea's few economic partners while the country has been under U.N. sanctions for decades, is also its largest and most consistent source of aid. To avoid worsening its already tense relations on the Korean Peninsula, China refuses to grant North Korean defectors refugee status and considers them illegal economic migrants. About 76% to 84% of defectors interviewed in China or South Korea came from the northeastern provinces bordering China. Defectors caught in China are repatriated to North Korea, where they often face harsh interrogations and years of punishment, or even death, in political prison camps such as the Pukch'ang camp, or reeducation camps such as the Chungsan camp or Chongori camp. Different terms, official and unofficial, refer to North Korean refugees. On 9 January 2005, the South Korean Ministry of Unification announced the use of saeteomin (Korean: 새터민, "people of new land") instead of talbukja (탈북자, "people who fled the North"), a term about which North Korean officials had expressed displeasure.[1] A newer term is bukhanitaljumin (Korean: 북한 이탈 주민; Hanja: 北韓離脫住民), which has the more forceful meaning of "residents who renounced North Korea".[2] North Korea expert Andrei Lankov has criticized the term "defectors", since most do not seek refuge because of political dissent but are instead motivated by material deprivation.[3] Since 1953, 100,000–300,000 North Koreans have defected, most of whom have fled to Russia or China.[4] 1,418 were registered as arriving in South Korea in 2016.[5] In 2017, there were 31,093 defectors registered with the Unification Ministry in South Korea, 71% of whom were women.[6] By 2018, the numbers had dropped dramatically since Kim Jong-un took power in 2011, trending towards fewer than a thousand per year, down from the peak of 2,914 in 2009.[7] Professor Courtland Robinson of the Bloomberg School of Public Health at Johns Hopkins University estimated that, in the past, between 6,824 and 7,829 children had been born to North Korean women in the three northeastern provinces of China.[8] More recently, a survey conducted in 2013 by Johns Hopkins and the Korea Institute for National Unification (KINU) found about 8,708 North Korean defectors and 15,675 North Korean children in the same three northeastern areas of China: Jilin, Liaoning and the Yanbian Korean Autonomous Prefecture.
Based on a study of North Korean defectors, women make up the majority of defections. In 2002, they comprised 56% of defections to South Korea (1,138 people), and by 2011 the share had grown to 71% (2,706 people). More women leave the North because, as the breadwinners of their families, they are more likely to suffer financial hardship; women predominate in service-sector jobs, whereas men are employed in the military. Some 33% of defectors cited economic reasons as most important. Men, in contrast, had a higher tendency to leave the country due to political, ideological or surveillance pressure.[9] In the first half of 2018, women made up 88% of defectors to the South.[10] According to State Department estimates, 30,000 to 50,000 of a larger number of North Koreans in hiding have the legal status of refugees.[11] Around 11,000 North Korean refugees remained in hiding near the border with their home country.[12] These refugees are not typically considered members of the ethnic Korean community, and the Chinese census does not count them as such. Some North Korean refugees who are unable to obtain transport to South Korea marry ethnic Koreans in China and settle there; they blend into the community but are subject to deportation if discovered by the authorities. Those who have found "escape brokers" try to enter the South Korean consulate in Shenyang. In recent years, the Chinese government has tightened security and increased the number of police outside the consulate. Today there are new ways of entering South Korea: one is to follow the route to the Mongolian border; another runs through southeast Asian countries such as Thailand, which welcome North Korean defectors.[13] During the mid-1990s, the percentages of male and female defectors were relatively balanced.[14] In the early to mid-1990s, male labour was valuable, since North Korean defectors could work in the Chinese countryside and in factories and secure a hideout in return.[14] However, due to rising social security issues, including crime and violence involving North Koreans, the value of male labour decreased.[14] Females, on the other hand, were able to find easier means of settlement, including performing smaller labour tasks and marrying Chinese locals (mostly ethnic Koreans).[14] Today, 80–90% of North Korean defectors residing in China are females who settled through de facto marriage; a large number of them experience forced marriage and human trafficking.[14][15] Before 2009, over 70% of female North Korean defectors were victims of human trafficking.[15] Due to their vulnerability as illegal migrants, they were sold for low prices, around 3,000 to 10,000 yuan.[15] Violent abuse started in apartments near the border with China, from which the women were then moved to cities further away to work as sex slaves. Chinese authorities arrest and repatriate these North Korean victims.
North Korean authorities keep repatriates in penal-labour colonies (and/or execute them), execute Chinese-fathered babies "to protect North Korean pure blood", and force abortions on pregnant repatriates who are not executed.[16][17] After 2009, the percentage of female North Korean defectors with experience of human trafficking decreased to 15%, since large numbers of defectors began to enter South Korea through organized groups led by brokers.[15] However, the actual number may be larger, considering that many female defectors tend to deny their experience of prostitution.[15]

China refuses to grant refugee status to North Korean defectors and considers them illegal economic migrants. The Chinese authorities arrest and deport hundreds of defectors to North Korea, sometimes in mass immigration sweeps, and Chinese citizens caught aiding defectors face fines and imprisonment. In the early to mid-1990s, the Chinese government was relatively tolerant of North Korean defectors.[18] Unless the North Korean government sent special requests, the Chinese government did not seriously police the residence of North Koreans in Chinese territory.[18] However, as the North Korean famine intensified in the late 1990s, the number of defectors sharply increased, attracting international attention.[18] As a result, China stepped up inspections of North Korean defectors and began deporting them.[18]

In February 2012, Chinese authorities repatriated 19 North Korean defectors being held in Shenyang and five defectors held in Changchun. The case of the 24 detainees, who had been held since early February, garnered international attention due to the North's reported harsh punishment of those who attempt to defect. China repatriates North Korean refugees under a deal made with North Korea, its ally. Human-rights activists say those repatriated face harsh punishment, including torture and imprisonment in labour camps.[19] South Korean human-rights activists continue to stage hunger strikes and to appeal to the U.N. Human Rights Council to urge China to stop the deportations.[20][21][22] Human-rights organizations have compiled a list of hundreds of North Korean defectors repatriated by China.[23][24] For some of them, the fate after repatriation to North Korea ranges from torture and detention to prison camp or execution. The list includes humanitarian workers who were assassinated or abducted by North Korean agents for helping refugees.

There have been three cases of North Korean defectors escaping directly to Japan. In January 1987, a stolen boat carrying 13 North Koreans washed ashore at Fukui Port in Fukui Prefecture; the group then continued to South Korea via Taiwan.[25][26] In June 2007, after a six-day boat ride, a family of four North Koreans was found by the Japan Coast Guard off the coast of Aomori Prefecture.[27] They later settled in South Korea.[28][29][30] In September 2011, the Japan Coast Guard found a wooden boat carrying nine people: three men, three women and three boys. The group had been sailing for five days towards South Korea but had drifted towards the Noto Peninsula, believing they had arrived in South Korea. They were found in good health.[31] Japan has also resettled about 140 ethnic Koreans who managed to return to Japan after initially emigrating to North Korea under the 1959–1984 mass "repatriation" project of ethnic Koreans from Japan.
This supposedly humanitarian project, supported by Chongryon and conducted by the Japanese and North Korean Red Crosses, had involved the resettlement in North Korea of around 90,000 volunteers (mostly originally from South Korea), a move Chongryon hailed as going to a "paradise on earth".[32] Some of the Koreans who were repatriated, including Kim Hyon-hui, a student of Yaeko Taguchi, revealed evidence about the whereabouts of Japanese citizens who had been kidnapped by North Korea.[33]

A much shorter route than the standard China–Laos–Thailand route is straight to Mongolia, whose government tries to maintain good relations with both North and South Korea but is sympathetic to North Korean refugees. North Korean refugees caught in Mongolia are sent to South Korea, effectively granting them a free air ticket.[34] However, this route requires navigating the unforgiving terrain of the Gobi Desert, and tighter border control with China has made it less common.[citation needed]

The Philippines has in the past been used as a transit point for North Korean refugees, who often arrive from China and are then sent on to South Korea.[35] There may also be an unknown number of North Korean refugees who have blended into the South Korean community in the Philippines.[36] The country has been hard to reach because refugees have to cross China and then board a boat to the island nation.

A study by Kyung Hee University estimated that roughly 10,000 North Koreans live in the Russian Far East; many are escapees from North Korean work camps there.[37] Both South Korean diplomatic missions and local ethnic Koreans are reluctant to provide them with any assistance; it is believed that North Korea ordered the assassination of South Korean consul Choi Duk-gun in 1996, as well as of two private citizens in 1995, in response to their contact with the refugees. As of 1999, there were estimated to be only between 100 and 500 North Korean refugees in the area.[38]

In 2014, research by the human-rights organisation the European Alliance for Human Rights in North Korea claimed that there were around 1,400 North Korean refugees in Europe. Citing UNHRC statistics, the report identified North Korean communities in Belgium, Denmark, Finland, France, Germany, Luxembourg, the Netherlands, Norway, Sweden and the United Kingdom.[39] As of 2015, the largest North Korean community in Europe resides in New Malden, South West London. Approximately 600 North Koreans are believed to reside in the area,[40] which is already notable for its significant South Korean community.[41] According to a Eurostat report, a total of 820 North Koreans became citizens of European Union countries in the 2007–2016 period, with nearly 90 percent of them living in Germany and Britain.[42]

South Korea's Ministry of Unification is a government organization in charge of preparing for a future reunification between North and South Korea. It is responsible for North–South relations, including economic trade, diplomacy and communication, and for reunification education, which involves spreading awareness in schools and in the public sphere. The Ministry of Unification is thus the main organization that manages North Korean defectors in South Korean territory, establishing admission processes and resettlement policies.
It also has regional sub-organs called Hana Centers that help defectors in their day-to-day life for a smoother transition into South Korean society.[43] The number of defectors since the 1950–1953 Korean War is more than 26,000.[44]

In 1962, the South Korean government introduced the "Special law on the protection of defectors from the North", which, after revision in 1978, remained effective until 1993. According to the law, every defector was eligible for an aid package. After their arrival in the South, defectors would receive an allowance. The size of this allowance depended on the category to which the particular defector belonged (there were three such categories), determined by the defector's political and intelligence value. Apart from this allowance, defectors who delivered especially valuable intelligence or equipment were given large additional rewards. Prior to 1997, the payments were fixed in gold bullion rather than South Korean won, in an attempt to counter ingrained distrust of the reliability of paper money.[13] The state provided some defectors with apartments, and all those who wished to study were granted the right to enter a university of their choice. Military personnel were allowed to continue their service in the South Korean military, where they were given the same rank that they had held in the North Korean army. For a period of time after their arrival, defectors were also provided with personal bodyguards.[13]

In 2004, South Korea passed controversial new measures intended to slow the flow of asylum seekers, as it had become worried that a growing number of North Koreans crossing the Amnok and Duman Rivers into China would soon seek refuge in the South. The regulations tightened defector screening processes and slashed the amount of money given to each refugee from ₩28,000,000 ($24,180) to ₩10,000,000 ($8,636). South Korean officials said the new rules were intended to prevent ethnic Koreans living in China from entering the South, as well as to stop North Koreans with criminal records from gaining entry.[45] Defectors past retirement age receive Basic Livelihood Benefits of about ₩450,000 per month, which covers basic necessities but leaves them among the poorest of retirees.[46]

North Korean refugees arriving in the South first face joint interrogation by the authorities with jurisdiction, including the National Intelligence Service and the National Police Agency, to ensure that they are not spies.[47] They are then sent to Hanawon, a government resettlement center. There are also non-profit and non-governmental organizations that seek to make the sociocultural transition easier and more efficient for the refugees.
One such organization, Saejowi, provides defectors with medical assistance as well as education in diverse topics ranging from leadership and counseling techniques to sexual-violence prevention and avoidance.[48] Another organization, PSCORE, runs education programs for refugees, providing weekly English classes and one-on-one tutoring.[49]

Results of a survey conducted by the North Korean Refugees Foundation show that approximately 71% of North Koreans who have defected to South Korea since about 1998 are female.[44] The percentage of female defectors rose from 56% in 2002 to a high of 85% in 2018.[9] As of February 2014, the age demographics of North Korean defectors were: 4% aged 0–9, 12% aged 10–19, 58% aged 20–39, 21% aged 40–59, and 4% over 60.[44] More than 50% of defectors come from North Hamgyong Province.[51] Before leaving North Korea, 2% of defectors held administrative jobs, 3% were soldiers (all able-bodied persons are required to serve 7–10 years in the military), 38% were "workers", 48% were unemployed or supported by someone else, 4% worked in services, 1% worked in arts or sports, and 2% worked as "professionals".[44]

According to a poll by the National Human Rights Commission of Korea, around 50% of defectors said they had experienced discrimination because of their background. The two major issues were their inability to afford medical care and poor working conditions, and many complained of disrespectful treatment by journalists.[52] According to the World Institute for North Korea Studies, a young female defector who does not attend university has little chance of making a living in the South.[53]

With limited government-sponsored programs for migrants, North Koreans face vocational, medical, and educational difficulties assimilating in South Korea and rely on non-governmental organizations. In addition to the traumatic circumstances of their homeland, North Koreans may face social exclusion.[54] In a survey of over 24,000 North Koreans who migrated to South Korea between August and December 2012, 607 identified as suffering from depression, anxiety, or suicidal ideation.[55] Due to mistrust between North and South Koreans, evidence from a study of 182 defectors reveals that defectors are often unable to receive medical coverage from doctors.[56] Intergovernmental organizations such as the United Nations have repeatedly urged nations receiving North Korean defectors to increase efforts to identify defectors at high risk of poor mental health and to provide appropriate medical services and social support. Neither public nor private providers have been persuaded to offer such support, partly because of identity politics.

Identity politics is a major factor in the cultural division between North and South Koreans. Contrary to popular belief[by whom?], South Koreans and North Koreans share the same sense of nationalism and patriotism; however, most South Koreans harbour negative attitudes towards their northern neighbours[citation needed]. In 2010, the Korean General Social Survey (KGSS) conducted face-to-face research with over 1,000 South Koreans on their perspectives on the ethnic identity of North Korean defectors assimilating into South Korea.[57] The results reveal that North and South Koreans agree in not supporting the reunification of the Koreas, partly because some South Koreans have grown suspicious of defectors and their true intentions in migrating.
South Koreans' antagonism towards North Korea is mainly targeted at its communist regime and reflects a strict division of national identity.[57] Compared with North Koreans, South Koreans are more likely to harbor negative attitudes towards migrants and less likely to believe in the reunification of the Koreas. The KGSS survey suggests that the idea of "one nation, two countries" no longer holds.

Thailand is generally the final destination of North Koreans escaping through China. While North Koreans are not given refugee status there and are officially classified as illegal immigrants, the Thai government deports them to South Korea instead of back to North Korea, because South Korea recognizes native Koreans from the entire Korean Peninsula as citizens. These North Korean escapees are subject to imprisonment for illegal entry; however, most of these sentences are suspended.[58][59] Recognizing this, many North Koreans surrender themselves to the Thai police as soon as they cross the border into Thailand.[60]

Although Laos was once seen as a safe haven for North Korean defectors, in 2013 nine defectors were arrested and sent back to North Korea, causing international outrage, partly because one of the defectors was the son of a Japanese abductee.[61][62][63][64]

On 5 May 2006, unnamed North Koreans were granted refugee status by the United States, the first time the U.S. had accepted refugees from there since President George W. Bush signed the North Korean Human Rights Act in October 2004. The group, which arrived from an unnamed Southeast Asian nation, included four women who said they had been the victims of forced marriage. Since this first group, the U.S. had admitted approximately 170 North Korean refugees by 2014.[65] Between 2004 and 2011, the U.S. admitted only 122 North Korean refugees, and only 25 received political asylum.[66] A number of North Koreans, estimated at about 200, have entered illegally and generally settle in the ethnic Korean community in Los Angeles.[67] An aunt and uncle of Kim Jong-un have lived in the United States since 1998.[68]

Many defectors who reach China travel onwards to southeast Asia, especially Vietnam. The journey consists of crossing the Tumen River, either when frozen or when shallow in summer, in camouflage, and then secretly taking the train across China. From there, they can either work illegally, though they are often exploited, or attempt to travel on to South Korea.[69][70] Though Vietnam maintains diplomatic relations with North Korea, growing South Korean investment in Vietnam has prompted Hanoi to quietly permit the transit of North Korean refugees to Seoul.
The increased South Korean presence in the country has also proved a magnet for defectors; four of the biggest defector safehouses in Vietnam were run by South Korean expatriates, and many defectors indicated that they chose to cross the border from China into Vietnam precisely because they had heard about such safehouses.[71] In July 2004, 468 North Korean refugees were airlifted to South Korea in the single largest mass defection; Vietnam initially tried to keep its role in the airlift secret, and in advance of the deal, even anonymous sources in the South Korean government would only tell reporters that the defectors came from "an unidentified Asian country".[72] Following the airlift, Vietnam tightened border controls and deported several safehouse operators.[71] On 25 June 2012, a South Korean activist surnamed Yoo was arrested for helping North Korean defectors escape.[73][74][75]

North Korean asylum seekers and defectors have been arriving in Canada in rising numbers since 2006.[76] Radio Free Asia reports that in 2007 alone, over 100 asylum applications were submitted, and that North Korean refugees have come from China or elsewhere with the help of Canadian missionaries and NGOs. The rapid increase in asylum applications to Canada reflects defectors' limited options, especially as receiving asylum elsewhere becomes more difficult. On 2 February 2011, then-Prime Minister Stephen Harper met Hye Sook Kim, a North Korean defector, and received advice from Dr. Norbert Vollertsen: "Canada can persuade China, among others, not to repatriate the North Korean refugees back to North Korea but, instead, let them go to South Korea and other countries, including Canada."[77]

North Korean defectors experience serious difficulties of psychological and cultural adjustment once they have been resettled. This occurs mainly because of the conditions and environment in which North Koreans lived in their own country, as well as their inability to fully comprehend the new culture, rules, and ways of living in South Korea.[78] Difficulties in adjustment often take the form of post-traumatic stress disorder (PTSD), a mental disorder that develops after a person has experienced a major traumatic event. In the case of North Koreans, such traumatic events and experiences include the brutality of the regime, starvation, ideological pressure, propaganda, and political punishment.[79] Some studies have found a direct connection between physical illness and PTSD: exposure to trauma worsens physical health by way of PTSD.[80] PTSD-related symptoms include disturbing memories or dreams related to the traumatic events, anxiety, mental or physical distress, and altered patterns of thinking.[81] Depression and somatization are two of the conventional presentations of PTSD, both of which are diagnosed among North Korean defectors, with women diagnosed at higher rates.[82] According to a recent survey, about 56% of North Korean defectors are affected by one or more types of psychological disorder.[83] 93% of surveyed North Korean defectors identified food and water shortages, and the constant illness that follows from lack of access to medical care, as the most common types of traumatic experience preceding PTSD.[83] Such traumatic experiences greatly influence the ways North Korean defectors adjust to new places.
PTSD often prevents defectors from adequately assimilating into a new culture, holding jobs, and accumulating material resources.[84] Traumatic events are not the only reason North Koreans find it difficult to adjust to a new way of living. Woo Teak-jeon conducted interviews with 32 North Korean defectors living in South Korea and found that other adjustment difficulties, unrelated to PTSD, arise from factors such as the defectors' suspiciousness, their ways of thinking, the prejudice of the new society, and unfamiliar sets of values.[78] In many instances, North Korean defectors seem unable to adjust easily even when it comes to nutrition: according to research conducted by The Korean Nutrition Society, North Koreans used to consuming only small daily portions of food continue the same habits even when given an abundance of food and provisions.[85]

The psychological and cultural adjustment of North Koreans to new norms and rules is a sensitive issue, but there are ways to ease it. According to Yoon, the collective effort of the defectors themselves, the government, NGOs, and humanitarian and religious organizations can make the adjustment process smoother and less painful.[86] The non-profit NGO Teach North Korean Refugees (TNKR) has received positive recognition for aiding refugees' adjustment to life outside North Korea.[87][88] According to its website, TNKR's mission is to empower North Korean refugees to find their own voice and path through education, advocacy, and support.[89] Its primary focus is to assist North Korean refugees in preparing for their future and transitioning to life outside North Korea by providing free English-learning opportunities. TNKR also hosts biannual English public-speaking contests for North Korean refugees[90] and holds public forums that offer first-hand accounts of life in, escape from, and adjustment outside of North Korea.[91] TNKR was founded in 2013 by Casey Lartigue Jr. and Eunkoo Lee, who co-direct the organization. Lartigue and Lee gave a joint TEDx Talk in 2017 that tells the history of TNKR and offers practical lessons for making the world a better place.[92]

In some cases, defectors voluntarily return to North Korea. Exact numbers are unknown;[93] however, in 2013 their number was reported[by whom?] to be increasing. Double defectors either take a route through third countries such as China, or may defect directly from South Korea.[94] In 2014, the Unification Ministry of South Korea said it had records of only 13 double defections, three of whom defected to South Korea again.[95] However, the total number is thought to be higher: a former South Korean MP estimated that in 2012 about 100 defectors returned to North Korea via China.[95] In 2015, it was reported that about 700[96] defectors living in South Korea were unaccounted for and had possibly fled to China or Southeast Asia in hopes of returning to North Korea.[94] In one case, a double defector re-entered North Korea four times.[93] North Korea under Kim Jong-un has allegedly started a campaign to attract defectors to return, with promises of money, housing and employment.
According to unconfirmed reports, government operatives have contacted defectors living in South Korea and offered them guarantees that their families are safe, 50 million South Korean won ($44,000),[93] and a public appearance on TV.[95] It was reported in 2013 that North Korea had aired at least 13 such TV appearances, in which returning defectors complain about poor living conditions in the South and pledge allegiance to Kim Jong-un.[95][97] In November 2016, the North Korean website Uriminzokkiri aired an interview with three double defectors who complained that they had been treated as second-class citizens.[53] In 2013, a re-defector was charged by South Korea upon his return.[98] In 2016, defector Kim Ryon-hui's request to return to North Korea was denied by the South Korean government.[99][100] In June 2017, Chun Hye-sung, a defector who had been a guest on several South Korean TV shows using the name Lim Ji-hyun, returned to the North; on North Korean TV, she said that she had been ill-treated and pressured into fabricating stories detrimental to North Korea.[53] In July 2017, a man who had defected to the South and then returned to the North was arrested under the National Security Act when he entered the South again.[101] In 2018, a defector was arrested in South Korea and charged with sending rice to the North Korean secret police prior to an attempted return.[citation needed] In 2019, South Korea deported two North Korean fishermen who had tried to defect, saying that an investigation had found the men had killed 16 of their crewmates.[102]

Finding Cheap Packages for a Tour to Europe

Europe is the most preferred destination for many people, whether for honeymoon or family trips. It is a beautiful continent, and its many world heritage sites make it traditionally and culturally rich; it has much to share with visitors from all over the world, including culture, history and cuisine. However, Europe trips were priced very high in the past, and many people could only dream of going. This is the time to make those dreams come true: many travel agencies now offer tour packages at very nominal rates, and with innumerable agencies competing, all of them offer cheap rates.

Cheap Europe tour packages include travel tickets, living arrangements, sight-seeing, food, various other arrangements, and guides to take people to the continent's best-known places. These packages also include transport for wandering wherever one wants to go. All the arrangements are made by the agency once a specified initial payment is made; after that, one doesn't have to worry about anything and can enjoy a tension-free journey. There are many kinds of packages available, so one can choose the plan that best suits one's requirements. Some travel agencies also provide customized tours, where people can select only their favourite places within the continent. These flexible trips can help people who cannot spare many days for a trip: they can select the most beautiful places in Europe and visit only those places on their tour.
Among the beautiful places one can select for a limited tour are Tuscany, Lauterbrunnen, the Greek Islands, Rome, Venice, the Eiffel Tower, London Bridge, Athens, the Communications Palace, the Einstein Café, and Hamlet's Castle.

Before selecting any package, there are some points to keep in mind. First, have a fair idea of how much money you want to spend on the whole tour, including tickets, guide, and accommodation. Second, tell the travel agency the number of persons going on the trip; on the basis of the estimated budget and the number of persons, the agents can suggest the various tour packages available within budget. Always confirm with the travel agent whether there are any hidden costs, so that future problems can be avoided. Nowadays, the internet has also become a good source for finding packages within an estimated budget: it gives an opportunity to compare packages across different travel agencies, whereas when one searches offline, only a limited number of agencies can be visited.

In conclusion, many Europe tour packages are available nowadays from all travel agencies. However, a clear understanding of the package and pre-planning of the budget are a must before selecting a suitable one. After a package is chosen, one can enjoy the tour, as all the arrangements are made by the travel agency.

Cheap Vacation Packages to Singapore Guarantee an Exhilarating Vacation

A matchless blend of mesmerizing natural scenery, superb hospitality, great-value accommodation, fabulous shopping, delectable dining and a vibrant nightlife makes Singapore simply irresistible. The island country is a tourist hotspot that lures vacationers from all parts of the globe and all walks of life, and its numerous adventure-sport and leisure opportunities create truly memorable holidays with friends, family or solo. Cheap vacation packages to Singapore are the best and easiest way to go on your vacation. Deals on holiday packages to the island city are easily available on the internet from various reputed web travel portals at competitive prices. The country is on every traveller's list and enjoys a tropical rainforest climate, so any time of the year is a good time to fly there for a relaxing and fun-filled holiday. Vacation package deals typically comprise flight fares, hotel bookings, car-rental leases and often passes for events, theme parks and attractions, along with spa and massage facilities and local delights.

Singapore is known as the Soul of Asia, as it sits at the heart of Southeast Asia, where the ancient and contemporary worlds blend uniquely. Exhilarating events and irrepressible energy and excitement mesmerize all visitors, as the nation is suffused with energy, colour, bliss and charm. Known as the Merlion City, it is sure to captivate your heart, mind and soul. With rare delights such as wonderful shopping and dining experiences, thrilling adventure sports, and an affluent, vibrant and contrasting blend of multi-ethnic cultures, delectable cuisines and majestic art and architecture, Singapore is a favourite hotspot offering vacationers the best of holiday delights.
Among the city's most vivacious tourist charms is the beach paradise of Sentosa, a favourite hangout for visitors. Tourists can also head to several amazing attractions such as the Giant Merlion, Singapore Zoo, the Singapore Discovery Centre, Jurong Bird Park (the largest bird park in Asia), the Singapore Botanic Gardens, Sungei Buloh Wetland Reserve, the Singapore Science Centre, the Chinese and Japanese Gardens, the Singapore Flyer and the Asian Civilisations Museum. Enjoy the unique night safari, try the numerous water sports on the beaches, and take a peek at the life of the seas and oceans. Book cheap travel packages to Singapore and enjoy the shopping experience of a shopper's paradise blessed with an abundance of malls and markets and low tariffs and tax rates on imports; part of the fun is the excellent buys and the great variety of shops. Venture out for sports such as golfing, surfing, scuba diving, ice skating and snow skiing, as one gets unlimited choices. Relax with spa and massage treatments, or be a spectator at the Formula 1 racing or at the Singapore Polo Club. Different communities call the island home, so some of the finest dining experiences await visitors. Take home unique memories of multi-ethnic cultures. Festivals in Singapore are a delight to see, witness and savour. Enthralled by its attractions, vacationers return year after year, and every traveller has the island on their wish list.

BonVoyage1000 Cheap Travel Packages

This article is about the BonVoyage1000 team leaders, BonVoyage1000 cheap travel packages, and further information about those packages. Bon Voyage 1000 is a company started by three people: Mike Larikan, Steve Abraham and Steve Sughrim. The company is in the travel business, but it is not an ordinary business; it is a multi-level marketing (MLM) business. In recent times there have been many multi-level marketing businesses: Amway, ACN, Mary Kay, Avon and many more have been in the market for quite some time, and heavy competition now exists. In this kind of business a person can earn money from home, doing all the deals over the internet.

Many such businesses are not 100 percent trustworthy, because some companies cheat their customers or disappear overnight, and the customers who trusted them lose the money they invested. The Bon Voyage 1000 company offers a wide range of services to its members, mainly travel packages. The only requirement is to become a member by paying a specified one-time investment. Just becoming a member does not earn you money, however; sufficient hard work is needed, and that is what earns money at a faster rate.

Many multi-level marketing systems like this do let you earn quick money, but they do not last many years; within a short period they go out of fashion. So one must first be prepared for the risks of investing in these types of businesses, as no one can predict the life span of the business. This company does, however, offer travel-package benefits: once you have become a member, you will be able to travel to almost any location in the world with the company's help. The person receives a lifestyle card on becoming a member.
On joining as a member, the person is also offered rewards such as cruises, condos and unique deals at very cheap rates; this is a direct advantage of membership, and a condo can be had for as little as $100 a week. This brings the dual benefit of leisure and business travelling, as well as savings on booking trips around the world. Lifetime memberships are also available, costing between $1,000 and $3,000. The company also offers a compensation plan known as the "Lifestyle Rewards Program", offered only to members who want to do business by recruiting new members. A person can earn up to a thousand dollars for the completion of a cycle; to complete a cycle, a person must gather two members under them, and those two members must in turn each gather two members of their own.
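To make the cycle arithmetic concrete, here is a minimal sketch of how such a payout rule could be computed. This is only an illustration under stated assumptions, not BonVoyage1000's actual system: the two-by-two cycle shape and the flat $1,000 payout are taken from the description above, while the data structure, function names and example names are hypothetical.

    # Hypothetical model of the "cycle" payout rule described above.
    # Assumption: a member completes one cycle when they have two direct
    # recruits and each of those recruits has two recruits of their own
    # (six people in total below the member); each cycle pays $1,000.

    CYCLE_PAYOUT = 1_000  # dollars per completed cycle, per the article

    def cycles_completed(downline: dict[str, list[str]], member: str) -> int:
        """Return 1 if `member` has two direct recruits who each have
        two recruits of their own, else 0."""
        directs = downline.get(member, [])
        if len(directs) < 2:
            return 0
        full_legs = sum(1 for d in directs[:2] if len(downline.get(d, [])) >= 2)
        return 1 if full_legs == 2 else 0

    def payout(downline: dict[str, list[str]], member: str) -> int:
        """Dollar payout for `member` under the assumed rule."""
        return cycles_completed(downline, member) * CYCLE_PAYOUT

    # Example: Alice recruits Bob and Carol; each of them recruits two more.
    downline = {
        "alice": ["bob", "carol"],
        "bob": ["dan", "eve"],
        "carol": ["fay", "gus"],
    }
    print(payout(downline, "alice"))  # 1000: one completed cycle
    print(payout(downline, "bob"))    # 0: Bob's recruits have no recruits yet

Note that under this rule each completed cycle requires six new recruits below a member, which is why binary recruitment plans of this kind grow, and then stall, exponentially.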
Disneyland's Discount Packages for the Military

Disneyland provides discount packages and cheap tickets for the military. Disneyland salutes service members, both active and retired, and they can receive special treatment in the form of cheap tickets and discount packages, although every year Disney makes some changes in its policies regarding discounts and cheap packages. For 2014–2015, Disney offered a four-day theme-park ticket in Orlando, a water-park option for Florida, and three-day park-hopper tickets for California.

Be careful about the dates: after the date has passed, the offer expires, and no matter how much of a soldier you are, you cannot obtain the discounts and cheap tickets. So you have to be really careful about the dates. Visit the ParkSavers company website, which provides really useful information regarding special discounts. The discounts are available for retired and active-duty service members and even for veterans.

Special discounts: for military personnel, Disneyland currently provides a special discount of 20% on room reservations. But again, you have to be careful about the dates and deadlines of the offer.

Discounts on resorts: the following are the discounts that military personnel get on resorts.
- 30% off Value Resorts. Value Resorts include themes like Art of Animation, All-Star Movies, Pop Century, Fantasia, All-Star Music and All-Star Sports.
- 35% off Moderates. The Moderates include Coronado Springs, Disney All-Star, Caribbean Beach Resort, Disney's Port Orleans Resort - Riverside, etc.
- 40% off Deluxe Resorts. The Deluxe Resorts have themes such as Polynesian Village Resort, Animal Kingdom Lodge, Wilderness Lodge, Grand Floridian, Yacht Club Resort, Contemporary Resort and BoardWalk Inn.

While making reservations, military members may also add any one of the following three dining packages:
- Dining
- Quick Service Dining
- Deluxe Dining

Shades of Green: active-duty and retired military personnel, as well as honorably discharged veterans, can enjoy the luxuries of the Shades of Green (SOG) resort. It is an on-property resort, not fully operated and controlled by Disney but nonetheless under Disney's wing; it was completely renovated and expanded in 2005.

Stay Healthy While Flying
One key to enjoying your cheap fares, whether cheap airplane tickets or cheap vacation packages, is to protect yourself against illness.

Great 2012 Budget Destinations
Cheap travel options, whether cheap airplane tickets, discount hotel rooms, cheap auto rentals, or cheap vacation packages, remain readily available to a number of destinations.

Expedited Screening for Military Personnel
Few things are more appreciated by travelers than finding cheap travel options, whether cheap airplane tickets or cheap vacation packages.

Tips on Finding Cheap Fares
Most people look for cheap airplane tickets and discount hotel rooms or cheap vacation packages prior to finalizing their travel plans.

Passengers' Rights in the Air
Frequent travelers are often more confident about how to find cheap travel options, be they cheap airplane tickets, discount hotel rooms, cheap vacation packages, or cheap auto rentals, than they are when it comes to understanding their legal rights as fliers on commercial aircraft.

Cheap and Best Holiday Packages in India for Budget-Oriented People
Searching for cheap and best holiday packages in India is not a difficult job, as various destinations are quite popular for budget travel. There is no dearth of options: you can select from beaches, historical attractions, hill stations, backwaters and more to make your visit memorable.

The Hill Stations
Summer is a great time to explore the hill stations of India, as the climate in these areas is cool and offers the perfect ambience to escape the scorching summer heat. Among the cheap hill stations, Ooty, Manali, Shimla, Darjeeling and Mussoorie are famous. Various packages are offered for these, and they can be extended depending on your needs and budget. The hill stations are well connected to Delhi by road, rail and air.

Monsoon Tour of Kerala
Everybody likes rain, as it brings happiness and is considered a theme of romance, all the more so when it is celebrated in Kerala. The monsoon season in Kerala is wonderful, and people from across the globe come to feel the unique experience. In Kerala, the monsoon plays an important role in enhancing the beauty and cultivation of the state. Do visit Kerala during the monsoon, as the green environment and the chirping of birds will always make this land feel attractive.

Tour of Pristine Kashmir
Kashmir is also a cheap and budget-oriented destination where you will have lots of things to see and enjoy. It is considered the most appealing place among India's destinations. A week-long Kashmir tour will give you many beautiful moments to cherish throughout your life. The tour covers destinations like Srinagar, Pahalgam and Gulmarg. Known as the Queen of Valleys, Kashmir is truly a haven and shares borders with Ladakh and other regions.

Wildlife Sanctuaries and Beaches
A tour of wildlife safaris also counts among the cheap and best India tours. India is blessed with more than 441 wildlife sanctuaries, and each sanctuary has its own identity. All these sanctuaries are known for preserving the finest species of animals.
Among the various national parks, the Periyar Tiger Reserve in Kerala, Jim Corbett National Park and Ranthambore National Park are quite famous. Adventure lovers can experience river rafting, mountaineering, jeep safaris, elephant safaris and more.

Cultural Tour
From the beginning, India has been known as a cultural hub of various rituals, customs, religions, languages and ethnicities. Various places display the ancient cultures of India through centuries-old monuments and forts; among them are Delhi, Agra, Rajasthan, Karnataka and Madhya Pradesh. Select one of these and enjoy your trip!

How to Plan an Error-Free Vacation Package
Vacation packages are the most convenient and cheapest option when planning a holiday. They are convenient because they are just a click away, they can be customized according to your needs, and they involve less research, as you simply choose from the options available.

Airlines Denying Passage Based on Inappropriate Attire
Don't assume you are going to be allowed to board your plane simply because you have purchased cheap airfare, whether cheap airplane tickets or cheap vacation packages.

Finding Cheap Fares More Challenging
Airline ticket prices have increased and will go up even more this summer, making it harder to find cheap travel options, particularly cheap airplane tickets and cheap vacation packages. Airlines claim that soaring fuel prices are likely to cost them billions more than last year.

The Best Holiday Packages to Fiji Have a Secret: Cheap Flights
Holiday packages involve many concerns: the activities you can do, the hotels you will stay at, and the airline that will take you to the desired place. Some years ago such concerns could preclude a trip to Fiji; nowadays, however, cheap flights enable you to make it the best holiday ever.

Things to Include in Shimla Cheap Tour Packages
Booking a less expensive tour package is a great way to save money while travelling to Shimla. However, one should also consider whether a cheap Shimla tour package can satisfy the necessities of a tour. Tastes differ, and so do choices: most people wish to save money, while a few feel better spending for their comfort. The same happens when planning a tour package to a place like Shimla, where people book everything from the cheapest packages to the most expensive. Everyone wishes to spend less while travelling; however, saving money does not by itself ensure pleasure throughout the journey. So, whether you are booking a family package or a Kullu Manali and Shimla honeymoon package, there are a few things to examine about the facilities of the tour package before booking.

Accommodation: It is quite fortunate that the stress of choosing where to stay is handled by the tour package you book. However, if the accommodation does not meet your expectations, the mood of the entire journey may be spoilt, so it is always advisable to check whether the accommodation fits your desired standard. To avoid inconvenience, make a list of the best options, as you will have a wide range from which to pick.

Transportation: One of the most stressful parts of planning a tour to Shimla is the transportation arrangement.
Booking transportation while sitting at home may not be fully satisfying, but there is no other option. Though there is no direct flight service to Shimla, the tour agencies will arrange alternative transport from nearby cities like Delhi or Chandigarh. So it is always advisable to choose the transport that will be most convenient for you.

Sightseeing: People visit Shimla with many expectations and plans. If you have preferred locations or sites to visit, don't hesitate to inquire about them; nothing is more satisfying than completing the tour having visited the desired places. So always find out whether the package can fulfil your wishes, or you will end up with regrets.

Activities: Every year many backpackers and adventure lovers visit Shimla, during both the peak and the off-peak season, to enjoy activities like paragliding, ice skating, trekking and horse riding. During the peak season you may not find vacancies due to the large crowds at the sites, so you should always book the activities you want to do in advance.

Costs: Most people wish to get things at a cheap price. At the same time, you should be aware of the hidden costs that can lie beneath your travel expenses; many people are shocked when they find hidden charges beneath their hotel bills. To avoid such cases, have an in-depth discussion with the package provider about the expected costs.