Conversion Rate Optimization
2DaMax Marketing | (646) 762-7511
Contact Us Today!
(646) 762-7511
954 Lexington Ave New York, NY 10021
https://sites.google.com/site/newyorkdigitalmarketingagency/
https://www.2damaxmarketing.com/new-york-city-marketing
Conversion rate optimization
In internet marketing and web analytics, conversion rate optimization (CRO), also called conversion optimization, is a system for increasing the percentage of visitors to a website who convert into customers,[1] or, more generally, take any desired action on a webpage.[2]
Online conversion rate optimization (or website optimization) was born out of the need of e-commerce marketers to improve their websites' performance in the aftermath of the dot-com bubble, when technology companies became more conscious of their spending and invested more in website analytics.
After the bubble burst, with website creation becoming more accessible, a large number of pages with poor user experience were created.
As competition on the web grew during the early 2000s, website analysis tools became available and awareness of website usability increased, prompting internet marketers to produce measurable results for their tactics and to improve their websites' user experience.
In 2004, new tools enabled internet marketers to experiment with website design and content variations to determine which layouts, copy text, offers, and images perform best.
Testing became more accessible and better known.
This form of optimization accelerated in 2007 with the introduction of the free tool Google Website Optimizer.[3] Today, optimization and conversion are key aspects of many digital marketing campaigns.
A research study conducted among internet marketers in 2017, for example, showed that 50% of respondents thought that CRO was "crucial to their overall digital marketing strategy".[4]

Conversion rate optimization shares many principles with direct response marketing – a marketing approach that emphasizes tracking, testing, and ongoing improvement. Direct marketing was popularized in the early twentieth century and supported by the formation of industry groups such as the Direct Marketing Association, which formed in 1917.[5] Like modern-day conversion rate optimizers, direct response marketers also practice A/B split testing, response tracking, and audience testing to optimize mail, radio, and print campaigns.[6]

Conversion rate optimization seeks to increase the percentage of website visitors that take a specific action (often submitting a web form, making a purchase, signing up for a trial, etc.) by methodically testing alternate versions of a page or process.[citation needed] In doing so, businesses are able to generate more leads or sales without investing more money in website traffic, hence increasing their marketing return on investment and overall profitability.[7] Statistical significance testing helps ensure that the result of a test is not due merely to chance.
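To make that significance check concrete, here is a minimal sketch of a two-proportion z-test on two page variants. The visitor and conversion counts are hypothetical examples, and production code would usually rely on an established statistics library rather than this hand-rolled normal approximation.

```python
# A minimal sketch of checking statistical significance for an A/B test
# using a two-proportion z-test; the visitor and conversion counts are
# hypothetical example numbers, not data from the text.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return z, p_value

z, p = two_proportion_z_test(conv_a=150, n_a=5000, conv_b=190, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests the lift is unlikely to be chance
```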
There are several approaches to conversion optimization with two main schools of thought prevailing in the last few years.
One school is more focused on testing to discover the best way to increase website, campaign, or landing page conversion rates.
The other school is focused on the pretesting stage of the optimization process.[8] In this second approach, the optimization company will invest a considerable amount of time understanding the audience and then creating a targeted message that appeals to that particular audience.
Only then would it be willing to deploy testing mechanisms to increase conversion rates.
A conversion rate is defined as the percentage of visitors who complete a goal, as set by the site owner.
It is calculated as the total number of conversions divided by the total number of people who visited your website.
For example: Your website receives 100 visitors in a day and 15 visitors sign up for your email newsletter (your chosen conversion to measure).
Your conversion rate would be 15% for that day.
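A minimal sketch of that calculation, using the example figures above:

```python
# A minimal sketch of the conversion rate calculation described above;
# the visitor and signup counts are the example figures from the text.
def conversion_rate(conversions, visitors):
    return conversions / visitors * 100  # expressed as a percentage

print(conversion_rate(conversions=15, visitors=100))  # 15.0 (% for that day)
```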
Conversion marketing
In electronic commerce, conversion marketing is marketing with the intention of increasing conversions – that is, turning site visitors into paying customers.[1] The process of improving the conversion rate is called conversion rate optimization.
However, different sites may consider a "conversion" to be a result other than a sale.[2] Say a customer abandons an online shopping cart.
The company could market a special offer, like free shipping, to convert the visitor into a paying customer.
A company may also try to recover the customer through an online engagement method, such as proactive chat, in an attempt to assist the customer through the purchase process.[3] The efficacy of conversion marketing is measured by the conversion rate: the number of customers who have completed a transaction divided by the total number of website visitors.
Conversion rates for electronic storefronts are usually low.[4] Conversion marketing can boost this number as well as online revenue and website traffic.
Conversion marketing attempts to solve low online conversions through optimized customer service, which requires a complex combination of personalized customer experience management, web analytics, and the use of customer feedback to contribute to process flow improvement and site design.[5]

By focusing on improving site flow, online customer service channels, and the online experience, conversion marketing is commonly viewed as a long-term investment rather than a quick fix.[6] Increased site traffic over the past 10 years has done little to increase overall conversion rates, so conversion marketing focuses not on driving additional traffic but on converting existing traffic.
It requires proactive engagement with consumers, using real-time analytics to determine whether visitors are confused or show signs of abandoning the site, and then developing the tools and messages needed to inform consumers about available products and ultimately persuade them to convert online.
Ideally, the customer would maintain a relationship post-sale through support or re-engagement campaigns.
Conversion marketing affects all phases of the customer life-cycle, and several Conversion marketing solutions are utilized to help ease the transition from one phase to the next.
The conversion rate is the proportion of visitors to a website who take action to go beyond a casual content view or website visit, as a result of subtle or direct requests from marketers, advertisers, and content creators.
Successful conversions are defined differently by individual marketers, advertisers, and content creators.
To online retailers, for example, a successful conversion may be defined as the sale of a product to a consumer whose interest in the item was initially sparked by clicking a banner advertisement.
To content creators, a successful conversion may refer to a membership registration, newsletter subscription, software download, or other activity.
For websites that seek to generate offline responses, for example telephone calls or foot traffic to a store, measuring conversion rates can be difficult because a phone call or personal visit is not automatically traced to its source, such as the Yellow Pages, website, or referral.
Possible solutions include asking each caller or shopper how they heard about the business and using a toll-free number on the website that forwards to the existing line.
For websites where the response occurs on the site itself, a conversion funnel can be set up in a site's analytics package to track user behavior.
Among many possible actions to increase the conversion rate, the most relevant may be:
App store optimization
App store optimization (ASO) is the process of improving the visibility of a mobile app (such as an iPhone, iPad, Android, BlackBerry or Windows Phone app) in an app store (such as the App Store for iOS, Google Play for Android, Windows Store for Windows Phone or BlackBerry World for BlackBerry).
App store optimization is to mobile apps what search engine optimization (SEO) is to websites.
Specifically, App store optimization includes the process of ranking highly in an app store's search results and top charts rankings.
Additionally, app store optimization encompasses activities focused on increasing the conversion of app store impressions into downloads (e.g. A/B testing of screenshots), collectively referred to as conversion rate optimization (CRO).[1] Earning an app store feature and web search app indexing are two additional activities which may be categorized within the remit of app store optimization.[2]

Apple's iTunes App Store was launched on July 10, 2008, along with the release of the iPhone 3G.[3] It currently supports iOS, including iPhone and iPad.
There is also a non-mobile app store for Macs.
Google's app store, Google Play, was launched September 23, 2008.[4] It was originally named Android Market and supports the Android operating system.
Since the launch of the iTunes App Store and Google Play, there has been an explosion in both the number of app stores and the size of the stores (number of apps and number of downloads).
In 2010, Apple's App Store grew to process US$1.78 billion worth of apps.[5] The iTunes App Store had 435,000 apps as of July 11, 2011, while Google Play had 438,000 as of May 1, 2012.[6][7] By 2016, Apple's App Store had surpassed 2 million total apps, and Apple had paid out close to $50 billion in revenue to developers.[8] Industry predictions estimated that by 2020 the App Store would hold over 5 million apps.[9] As the number of apps in app stores has grown, the chance of any one app being found has dropped.
This has led to the realization of how important it is to be noticed within an app store.
As marketers started working on ranking highly in top charts and search results, a new discipline was formed and some app marketers have reported success.
The first use of the term "app store optimization" to describe this new discipline appears to have been in a presentation by Johannes Borchardt on November 4, 2009.[10] It began to take hold as a standardized term not long after, with outlets such as Search Engine Watch and TechCrunch using the term by February 2012.[11][12]

App store optimization works by optimizing a target app's keyword metadata in order to earn higher ranks for relevant keywords in the search engine results page, as well as by increasing the rate at which users decide to download that app.
ASO marketers try to achieve a range of related goals, and many categorize their work into two distinct processes: keyword optimization and conversion rate optimization.
One of the main jobs of an ASO marketer is to optimize the keywords in an app's metadata, so that the app store keyword ranking algorithms rank that app higher in the search engine results page for relevant keywords.
This is accomplished by ensuring that relevant and important keywords are found in an app's metadata, as well as by adjusting the mix of keywords across an app's metadata elements in order to increase the ranking strength of target keywords.[16]
In order to increase the downloads of an app, the app's assets (e.g. the icon, preview video, screenshots, etc.) must also be optimized.
It is recommended to measure the effect of these optimizations by creating different variations of each asset, showing each variation to users, and then comparing the conversion rate of each variant, in a process referred to as A/B testing.
Google Play facilitates this process by providing ASO marketers with an A/B testing platform built into the Google Play Console.
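As a rough illustration of this kind of variant comparison, the sketch below computes the impression-to-install conversion rate for a few store-listing variants and picks the best one. The variant names and counts are hypothetical, and a real experiment would also include a significance check like the one sketched earlier.

```python
# A minimal sketch of comparing store-listing variants by their
# impression-to-install conversion rate; variant names and counts are
# hypothetical examples, not data from any real A/B test.
variants = {
    "icon_blue":  {"impressions": 12_000, "installs": 540},
    "icon_red":   {"impressions": 11_800, "installs": 610},
    "icon_green": {"impressions": 12_100, "installs": 495},
}

for name, stats in variants.items():
    rate = stats["installs"] / stats["impressions"]
    print(f"{name:11s} conversion rate: {rate:.2%}")

best = max(variants, key=lambda v: variants[v]["installs"] / variants[v]["impressions"])
print("best-performing variant:", best)
```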
For other platforms such as the Apple App Store, ASO marketers can run A/B tests via third-party A/B testing tools, by running a pre-post test (i.e. pushing new assets live to the store and measuring the impact before and after the change), by running a country-by-country experiment (i.e. testing different asset variations across similar countries, such as UK/AU/CA/US/NZ), or by testing different variations via ad platforms such as Facebook Ads.[17]

Many app marketers attempt to perform ASO in a way that most app stores would approve of and accept.
This is called "white hat" ASO and publicly covered by presentations, conferences.[18][19] Developers also use different platforms available to get their peers to rate their apps for them which provides great feedback.
Some app marketers, however, engage in what many call "black hat" ASO: practices which the app stores do not condone.[20][21] Black hat ASO includes falsifying downloads, ratings, or reviews, perhaps by using bots or other techniques to make app stores (and their users) believe an app is more important and influential than it actually is.
Apple has been proactively fighting against black hat ASO.
In February 2012, Apple released a statement, as reported by The New York Times, "warning app makers that using third-party services to gain top placement in App Store charts could get them banned from the store."[22] Google followed Apple in 2015 and started manually reviewing apps to enforce app quality and reduce black hat practices.[23]

At WWDC 2017, Apple announced major changes to its App Store experience arriving with iOS 11.
iOS 11 brought several major implications for ASO. Additionally, Apple now requires developers to use its iOS 10.3 in-app rating prompt, disallowing custom rating prompts.[26]
Landing page
Abandonment rate
CRO
Frosmo
Frosmo is a software and service company founded in 2008.
The company is headquartered in Helsinki, Finland, and has additional offices in Poland, the United Kingdom, Singapore, Spain and Sweden.
The company actively develops and offers a JavaScript-based conversion rate optimization solution, which can "bypass any limitations of current CMS or eCommerce platforms".[3] Having earlier founded the mobile gaming company DoDreams, Mikael Gummerus saw a gap in the gaming market for intelligent personalization software.
In late 2008, Frosmo was launched as a gaming platform that could analyze gamers' behavior and personalize the service accordingly.[3] Roughly half a year later, Frosmo secured an angel round from prominent investors and individuals such as Risto Siilasmaa.
The actual size of the investment round wasn't disclosed, but it was speculated to be "relatively large".[4] Although the gaming platform peaked at 750,000 monthly active users, the company quickly realized that the technology could be just as well implemented outside of that industry.
During 2010, Frosmo developed and launched Social Optimizer, their first product aimed for website personalization.[5] It was one of the first of such tools to make use of Facebook's like button plug-in, which was released in April of the same year.
The plug-in acts as a gateway for detailed visitor data, as VentureBeat[6] explains: The analytics are useful because the Facebook Like buttons have access to the information that people put on their Facebook pages.
You can thus aggregate the demographics of the people who Like your web site.
You can determine their age, their occupations, their job titles, and how often they visit your web site.
You can see what they look at and gather statistics on a daily basis that show what kind of audience your web site reaches.
In turn, you can take that aggregate data and share it with potential web site advertisers. While the data collection and analysis capabilities of the company's software received positive feedback, the UI was deemed to have a poor design.
For example, a part of their solution called Social Optimizer, a tool responsible for displaying personalized messages, used pop-ups that were "too ugly to click".[5] In 2014, the company estimated it had gained deals worth 1 million euros from the World Travel Market Travel Tech trade show alone.[7]
Conversion tracking
Website promotion
Website promotion is the continuing process used by webmasters to improve content and increase exposure of a website to bring more visitors.[1]:210 Many techniques such as search engine optimization and search engine submission are used to increase a site's traffic once content is developed.[1]:314 With the rise in popularity of social media platforms, many webmasters have moved to platforms like Facebook, Twitter and Instagram for viral marketing.
By sharing interesting content, webmasters hope that some of the audience will visit the website.
Examples of viral content are infographics and memes.
Webmasters often hire outsourced or offshore firms to perform Website promotion for them, many of whom provide "low-quality, assembly-line link building".[2]
Bounce rate
A
A or a is the first letter and the first vowel letter of the modern English alphabet and the ISO basic Latin alphabet.[1] Its name in English is a (pronounced /ˈeɪ/), plural aes.[nb 1] It is similar in shape to the Ancient Greek letter alpha, from which it derives.[2] The uppercase version consists of the two slanting sides of a triangle, crossed in the middle by a horizontal bar.
The lowercase version can be written in two forms: the double-storey a and single-storey ɑ.
The latter is commonly used in handwriting and fonts based on it, especially fonts intended to be read by children, and is also found in italic type.
In English grammar, "a", and its variant "an", are indefinite articles.
The earliest certain ancestor of "A" is aleph (also written 'aleph), the first letter of the Phoenician alphabet,[3] which consisted entirely of consonants (for that reason, it is also called an abjad to distinguish it from a true alphabet).
In turn, the ancestor of aleph may have been a pictogram of an ox head in proto-Sinaitic script[4] influenced by Egyptian hieroglyphs, styled as a triangular head with two horns extended.
By 1600 BC, the Phoenician alphabet letter had a linear form that served as the base for some later forms.
Its name is thought to have corresponded closely to the Paleo-Hebrew or Arabic aleph.
When the ancient Greeks adopted the alphabet, they had no use for a letter to represent the glottal stop—the consonant sound that the letter denoted in Phoenician and other Semitic languages, and that was the first phoneme of the Phoenician pronunciation of the letter—so they used their version of the sign to represent the vowel /a/, and called it by the similar name of alpha.
In the earliest Greek inscriptions after the Greek Dark Ages, dating to the 8th century BC, the letter rests upon its side, but in the Greek alphabet of later times it generally resembles the modern capital letter, although many local varieties can be distinguished by the shortening of one leg, or by the angle at which the cross line is set.
The Etruscans brought the Greek alphabet to their civilization in the Italian Peninsula and left the letter unchanged.
The Romans later adopted the Etruscan alphabet to write the Latin language, and the resulting letter was preserved in the Latin alphabet that would come to be used to write many languages, including English.
During Roman times, there were many variant forms of the letter "A".
First was the monumental or lapidary style, which was used when inscribing on stone or other "permanent" media.
There was also a cursive style used for everyday or utilitarian writing, which was done on more perishable surfaces.
Due to the "perishable" nature of these surfaces, there are not as many examples of this style as there are of the monumental, but there are still many surviving examples of different types of cursive, such as majuscule cursive, minuscule cursive, and semicursive minuscule.
Variants also existed that were intermediate between the monumental and cursive styles.
The known variants include the early semi-uncial, the uncial, and the later semi-uncial.[5] At the end of the Roman Empire (5th century AD), several variants of the cursive minuscule developed through Western Europe.
Among these were the semicursive minuscule of Italy, the Merovingian script in France, the Visigothic script in Spain, and the Insular or Anglo-Irish semi-uncial or Anglo-Saxon majuscule of Great Britain.
By the 9th century, the Caroline script, which was very similar to the present-day form, was the principal form used in book-making, before the advent of the printing press.
This form was derived through a combining of prior forms.[5] 15th-century Italy saw the formation of the two main variants that are known today.
These variants, the Italic and Roman forms, were derived from the Caroline script version.
The Italic form, also called script a, is used in most current handwriting and consists of a circle and vertical stroke.
This slowly developed from the fifth-century form resembling the Greek letter tau in the hands of medieval Irish and English writers.[3] The Roman form is used in most printed material; it consists of a small loop with an arc over it ("a").[5] Both derive from the majuscule (capital) form.
In Greek handwriting, it was common to join the left leg and horizontal stroke into a single loop, as demonstrated by the uncial version shown.
Many fonts then made the right leg vertical.
In some of these, the serif that began the right leg stroke developed into an arc, resulting in the printed form, while in others it was dropped, resulting in the modern handwritten form.
Italic type is commonly used to mark emphasis or more generally to distinguish one part of a text from the rest (set in Roman type).
There are some other cases aside from italic type where script a ("ɑ"), also called Latin alpha, is used in contrast with Latin "a" (such as in the International Phonetic Alphabet).
In modern English orthography, the letter ⟨a⟩ represents at least seven different vowel sounds. The double ⟨aa⟩ sequence does not occur in native English words, but is found in some words derived from foreign languages such as Aaron and aardvark.[6] However, ⟨a⟩ occurs in many common digraphs, all with their own sound or sounds, particularly ⟨ai⟩, ⟨au⟩, ⟨aw⟩, ⟨ay⟩, ⟨ea⟩ and ⟨oa⟩.
⟨a⟩ is the third-most-commonly used letter in English (after ⟨e⟩ and ⟨t⟩),[7] and the second most common in Spanish and French.
In one study, on average, about 3.68% of letters used in English texts tend to be ⟨a⟩, while the number is 6.22% in Spanish and 3.95% in French.[8] In most languages that use the Latin alphabet, ⟨a⟩ denotes an open unrounded vowel, such as /a/, /ä/, or /ɑ/.
An exception is Saanich, in which ⟨a⟩ (and the glyph Á) stands for a close-mid front unrounded vowel /e/.
In phonetic and phonemic notation:
In algebra, the letter a, along with other letters at the beginning of the alphabet, is used to represent known quantities, whereas the letters at the end of the alphabet (x, y, z) are used to denote unknown quantities.
In geometry, capital A, B, C etc.
are used to denote segments, lines, rays, etc.[5] A capital A is also typically used as one of the letters to represent an angle in a triangle, the lowercase a representing the side opposite angle A.[4] "A" is often used to denote something or someone of a better or more prestigious quality or status: A-, A or A+, the best grade that can be assigned by teachers for students' schoolwork; "A grade" for clean restaurants; A-list celebrities, etc.
Such associations can have a motivating effect, as exposure to the letter A has been found to improve performance, when compared with other letters.[9] "A" is used as a prefix on some words, such as asymmetry, to mean "not" or "without" (from Greek).
In English grammar, "a", and its variant "an", is an indefinite article.
Finally, the letter A is used to denote size, as in a narrow size shoe,[4] or a small cup size in a brassiere.[10]
Google Website Optimizer
Google Website Optimizer was a free website optimization tool that helped online marketers and webmasters increase visitor conversion rates and overall visitor satisfaction by continually testing different combinations of website content.[1] The Google Website Optimizer could test any element that existed as HTML code on a page including calls to action, fonts, headlines, point of action assurances, product copy, product images, product reviews, and forms.
It allowed webmasters to test alternative versions of an entire page, known as A/B testing, or to test multiple combinations of page elements such as headings, images, or body copy, known as multivariate testing.
It could be used at multiple stages in the conversion funnel.
On 1 June 2012, Google announced that GWO as a separate product would be retired as of 1 August 2012, and its functionality would be integrated into Google Analytics as Google Analytics Content Experiments.[2]
Conversion funnel
Conversion funnel is a phrase used in e-commerce to describe the journey a consumer takes through an Internet advertising or search system, navigating an e-commerce website and finally converting to a sale.
The metaphor of a funnel is used to describe the way users are guided to the goal with fewer navigation options at each step.
Using this metaphor, advertising efforts can be aimed at "upper funnel", "middle funnel", or "lower funnel" potential customers.[1] Typically, a large number of customers search for a product or service, or register a page view on a referring page that is linked to the e-commerce site by a banner ad, ad network or conventional link.
Only a small proportion of those seeing the advertisement or link actually click the link.
The metric used to describe this ratio is the click-through rate (CTR) and represents the top level of the funnel.
Typical banner and advertising click-through rates were around 0.02% in late 2010 and had decreased over the preceding three years[citation needed].
Click-through rates are highly sensitive to small changes such as link text, link size, and link position, and these effects interact cumulatively.
The process of understanding which creative material brings the highest click-through rate is known as ad optimization.
Once the link is clicked and the visitor to the referring page enters the e-commerce site itself, only a small proportion of visitors typically proceed to the product pages, creating further constriction of the metaphorical funnel.
Each step the visitor takes further reduces the number of visitors, typically by 30%–80% per page[citation needed].
Adding the product to the shopping cart, registering or filling in contact details and payment all further reduce the numbers step-by-step cumulatively along the funnel.
The more steps, the fewer visitors get through to becoming paying customers.
For this reason, sites with similar pricing and products can have hugely different conversion rates of visitors to customers and therefore greatly differing profits.
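As a rough illustration of this narrowing, the sketch below multiplies hypothetical pass-through rates step by step; only the roughly 0.02% click-through figure echoes the text, and the remaining step names and rates are illustrative assumptions.

```python
# A minimal sketch of how a conversion funnel narrows step by step; the
# step names and pass-through rates below are hypothetical illustrations,
# except the ~0.02% click-through rate cited in the text above.
impressions = 1_000_000
steps = [
    ("clicked the banner ad",  0.0002),  # click-through rate
    ("reached a product page", 0.40),
    ("added an item to cart",  0.50),
    ("registered and paid",    0.30),
]

count = impressions
for name, pass_rate in steps:
    count *= pass_rate
    print(f"{name:24s} {count:10.1f}")

print(f"overall conversion rate: {count / impressions:.4%}")
```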
Formstack
Formstack is a data management system that helps users collect information through various types of online forms, including surveys, job applications, event registrations, and payment forms.
The company was founded by Ade Olonoh on February 28, 2006, and serves over 500,000 users in 112 countries.[2]
In November 2009, Formstack (then called FormSpring) launched formspring.me, a social question and answer site where users could ask questions, give answers, and learn more about their friends.
Because of the success of the site, formspring.me was spun out as a separate company in January 2010 with a different team and different resources.
To allow both companies to continue their growth, FormSpring.com became Formstack.com to avoid any confusion with formspring.me.[3] Originally created as a simple online form builder, Formstack has expanded its offerings to include workflow management systems for various industries, including marketing, education, healthcare, and human resources.
The company is headquartered in Indianapolis, IN with another office located in Colorado Springs, CO.[4] Over half of Formstack's workforce works remotely in regions across the United States, as well as Canada, Poland, Argentina, Mexico, and the Netherlands.
Formstack enables organizations to build and design web forms without any programming or software skills required.
Formstack users can embed mobile-ready forms on their websites and social media profiles, collect online payments, gather feedback from customers and employees, and create process workflows for their organization.[2] The platform integrates with over 50 web applications, including apps for CRM management, email marketing, payment processing, and document management.[5] Other product offerings include conversion rate optimization tools for marketers, a mobile app that allows users to collect data while offline, a fully native form-building app for Salesforce, and HIPAA compliance for healthcare and HR users.[6]
Sentient Technologies
Sentient Technologies was an American artificial intelligence (AI) technology company based in San Francisco.[1] Sentient was founded in 2007 and received over $143 million in funding since its inception.[2][3] As of 2016, Sentient was the world's most well-funded AI company.
Focused on e-commerce and online content as well as trading,[4] Sentient originally operated in stealth mode as Genetic Finance Holding Ltd.
The company was founded in 2007 by Antoine Blondeau, Babak Hodjat and Adam Cheyer, who created the natural language technology that led to Siri, Apple's voice recognition software.[5][6] Sentient raised a $2 million Series A round of funding, and $38 million in a Series B round led by Horizons Ventures.[5] Sentient emerged from stealth mode in November 2014 with $103.5 million in Series C funding.[5][3][7]

Sentient worked with the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory to analyze blood pressure to predict the likelihood of sepsis in ICU patients.[5][2] The technology was researched in association with Saint Michael's Hospital at the University of Toronto.[8] The platform was also used to successfully automate financial services for Sentient's subsidiary, Sentient Investment Management.[9]

In 2015, Sentient launched an AI-powered product recommendation and personalization platform leveraging deep learning and online learning.
Shoes.com, a Vancouver-based online shoe retailer, was Sentient's first retail customer for this service.[8][10][11] After its launch, Sentient added other retail customers.[12][5][13] Sentient Ascend was launched in September 2016 as a SaaS AI-based conversion rate optimization platform, largely based on the same AI methods used in its financial technology IP.[14][15] In 2019, Sentient Technologies was dissolved, selling off Sentient Ascend to Evolv[16] and much of its AI intellectual property to Cognizant.[17]

Sentient's platform[8][18] combines evolutionary computation, which mimics biological evolution, and deep learning, which is based on the structure of nervous systems.[5] Sentient's algorithms work across as many as two million CPU cores and 5,000 GPU cards across 4,000 physical sites around the world, making it one of the largest known systems dedicated to AI.[4][19]
Feed conversion ratio
Multi-objective optimization
Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously.
Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and pollutant emissions of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively.
In practical problems, there can be more than three objectives.
For a nontrivial Multi-objective optimization problem, no single solution exists that simultaneously optimizes each objective.
In that case, the objective functions are said to be conflicting, and there exists a (possibly infinite) number of Pareto optimal solutions.
A solution is called nondominated, Pareto optimal, Pareto efficient or noninferior, if none of the objective functions can be improved in value without degrading some of the other objective values.
Without additional subjective preference information, all Pareto optimal solutions are considered equally good.
Researchers study Multi-objective optimization problems from different viewpoints and, thus, there exist different solution philosophies and goals when setting and solving them.
The goal may be to find a representative set of Pareto optimal solutions, and/or quantify the trade-offs in satisfying the different objectives, and/or finding a single solution that satisfies the subjective preferences of a human decision maker (DM).
A multi-objective optimization problem is an optimization problem that involves multiple objective functions.[1][2][3] In mathematical terms, a multi-objective optimization problem can be formulated as

min ( f_1(x), f_2(x), …, f_k(x) )  subject to  x ∈ X,

where the integer k ≥ 2 is the number of objectives and the set X is the feasible set of decision vectors.
The feasible set is typically defined by some constraint functions.
In addition, the vector-valued objective function is often defined as

f : X → R^k,  f(x) = ( f_1(x), …, f_k(x) )^T.

An element x* ∈ X is called a feasible solution or a feasible decision.
A vector z* := f(x*) ∈ R^k for a feasible solution x* is called an objective vector or an outcome.
In Multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously.
Therefore, attention is paid to Pareto optimal solutions; that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives.
In mathematical terms, a feasible solution x¹ ∈ X is said to (Pareto) dominate another solution x² ∈ X if

f_i(x¹) ≤ f_i(x²) for every index i ∈ {1, …, k}, and
f_j(x¹) < f_j(x²) for at least one index j ∈ {1, …, k}.

A solution x* ∈ X (and the corresponding outcome f(x*)) is called Pareto optimal if there does not exist another solution that dominates it.
The set of Pareto optimal outcomes is often called the Pareto front, Pareto frontier, or Pareto boundary.
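A minimal sketch of this dominance test and a brute-force Pareto front filter, assuming minimization of all objectives and a small set of hypothetical outcome vectors:

```python
# A minimal sketch of Pareto dominance and a brute-force Pareto front filter
# for a minimization problem; the sample outcomes are hypothetical.
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if outcome a Pareto-dominates outcome b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(outcomes):
    """Keep only the outcomes that are not dominated by any other outcome."""
    return [o for o in outcomes
            if not any(dominates(other, o) for other in outcomes if other != o)]

outcomes = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (6.0, 3.0), (7.0, 5.0)]
print(pareto_front(outcomes))  # [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0), (6.0, 3.0)]
```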
The Pareto front of a multi-objective optimization problem is bounded by a so-called nadir objective vector z^nad and an ideal objective vector z^ideal, if these are finite.
The nadir objective vector is defined as

z_i^nad = sup { f_i(x) : x ∈ X is Pareto optimal },  i = 1, …, k,

and the ideal objective vector as

z_i^ideal = inf { f_i(x) : x ∈ X },  i = 1, …, k.

In other words, the components of the nadir and ideal objective vectors define upper and lower bounds, respectively, for the objective function values of Pareto optimal solutions.
In practice, the nadir objective vector can only be approximated because, typically, the whole Pareto optimal set is unknown.
In addition, a utopian objective vector z^utopian with

z_i^utopian = z_i^ideal − ε,  i = 1, …, k,

where ε > 0 is a small constant, is often defined for numerical reasons.
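Continuing the sketch above, the ideal and nadir vectors can be estimated component-wise from a sampled set of Pareto optimal outcomes; the sample points are again hypothetical, and, as noted, the nadir point is usually only an approximation.

```python
# A minimal sketch of estimating the ideal, nadir and utopian objective vectors
# from a sampled set of Pareto optimal outcomes (minimization); sample values
# are hypothetical.
pareto_outcomes = [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0), (6.0, 3.0)]

ideal = tuple(min(o[i] for o in pareto_outcomes) for i in range(2))  # component-wise best
nadir = tuple(max(o[i] for o in pareto_outcomes) for i in range(2))  # component-wise worst on the front
epsilon = 1e-6
utopian = tuple(z - epsilon for z in ideal)                          # strictly better than the ideal

print(ideal, nadir, utopian)  # (1.0, 3.0) (6.0, 9.0) (0.999999, 2.999999)
```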
In economics, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable.
For example, consumer's demand for various goods is determined by the process of maximization of the utilities derived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods.
This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other.
A common method for analyzing such a problem is to use a graph of indifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer is faced with.
Another example involves the production possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources.
The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good.
A society must then use some process to choose among the possibilities on the frontier.
Macroeconomic policy-making is a context requiring Multi-objective optimization.
Typically a central bank must choose a stance for monetary policy that balances competing objectives — low inflation, low unemployment, low balance of trade deficit, etc.
To do this, the central bank uses a model of the economy that quantitatively describes the various causal linkages in the economy; it simulates the model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest.
Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based, process for ranking the alternatives and making the policy choice.
In finance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have the expected value of portfolio returns be as high as possible, and the desire to have risk, often measured by the standard deviation of portfolio returns, be as low as possible.
This problem is often represented by a graph in which the efficient frontier shows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations.
The problem of optimizing a function of the expected value (first moment) and the standard deviation (square root of the second central moment) of portfolio return is called a two-moment decision model.
In engineering and economics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective.
For example, energy systems typically have a trade-off between performance and cost[4][5] or one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values.
Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty.
Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value.
Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed.[6]

Product and process design can be greatly improved using modern modeling, simulation and optimization techniques.[citation needed] The key question in optimal design is the measure of what is good or desirable about a design.
Before looking for optimal designs it is important to identify characteristics which contribute the most to the overall value of the design.
A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or recovery of the product, efficiency, process safety, operation time etc.
Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives.
These objectives typically are conflicting, i.e.
achieving the optimal value for one objective requires some compromise on one or more of the other objectives.
For example, when designing a paper mill, one can seek to decrease the amount of capital invested in a paper mill and enhance the quality of paper simultaneously.
If the design of a paper mill is defined by large storage volumes and paper quality is defined by quality parameters, then the problem of optimal design of a paper mill can include objectives such as: i) minimization of the expected variation of those quality parameters from their nominal values, ii) minimization of the expected time of breaks and iii) minimization of the investment cost of storage volumes.
Here, the maximum volumes of the storage towers are the design variables.
This example of optimal design of a paper mill is a simplification of the model used in [7].

Multi-objective design optimization has also been implemented in engineering systems in circumstances such as control cabinet layout optimization,[8] airfoil shape optimization using scientific workflows,[9] design of nano-CMOS semiconductors,[10] system on chip design, design of solar-powered irrigation systems,[11] optimization of sand mould systems,[12][13] engine design,[14][15] optimal sensor deployment[16] and optimal controller design.[17][18]

Multi-objective optimization has been increasingly employed in chemical engineering and manufacturing.
In 2009, Fiandaca and Fraga used the multi-objective genetic algorithm (MOGA) to optimize the pressure swing adsorption process (cyclic separation process).
The design problem involved the dual maximization of nitrogen recovery and nitrogen purity.
The results provided a good approximation of the Pareto frontier with acceptable trade-offs between the objectives.[19] In 2010, Sendín et al.
solved a multi-objective problem for the thermal processing of food.
They tackled two case studies (bi-objective and triple objective problems) with nonlinear dynamic models and used a hybrid approach consisting of the weighted Tchebycheff and the Normal Boundary Intersection approach.
The novel hybrid approach was able to construct a Pareto optimal set for the thermal processing of foods.[20] In 2013, Ganesan et al.
carried out the Multi-objective optimization of the combined carbon dioxide reforming and partial-oxidation of methane.
The objective functions were methane conversion, carbon monoxide selectivity and hydrogen to carbon monoxide ratio.
Ganesan used the Normal Boundary Intersection (NBI) method in conjunction with two swarm-based techniques (Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) to tackle the problem.[21] Applications involving chemical extraction[22] and bioethanol production processes[23] have posed similar multi-objective problems.
In 2013, Abakarov et al. proposed an alternative technique to solve multi-objective optimization problems arising in food engineering.[24] The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of non-dominated or Pareto-optimal solutions.
The Analytic Hierarchy Process and Tabular Method were used simultaneously for choosing the best alternative among the computed subset of non-dominated solutions for osmotic dehydration processes.[25] In 2018, Pearce et al.
formulated task allocation to human and robotic workers as a Multi-objective optimization problem, considering production time and the ergonomic impact on the human worker as the two objectives considered in the formulation.
Their approach used a Mixed-Integer Linear Program to solve the optimization problem for a weighted sum of the two objectives to calculate a set of Pareto optimal solutions.
The application of the approach to several manufacturing tasks showed improvements in at least one objective in most tasks and in both objectives in some of the processes.[26] The purpose of radio resource management is to satisfy the data rates that are requested by the users of a cellular network.[27] The main resources are time intervals, frequency blocks, and transmit powers.
Each user has its own objective function that, for example, can represent some combination of the data rate, latency, and energy efficiency.
These objectives are conflicting since the frequency resources are very scarce, thus there is a need for tight spatial frequency reuse which causes immense inter-user interference if not properly controlled.
Multi-user MIMO techniques are nowadays used to reduce the interference by adaptive precoding.
The network operator would like to provide both wide coverage and high data rates; thus, the operator would like to find a Pareto optimal solution that balances the total network data throughput and user fairness in an appropriate subjective manner.
Radio resource management is often solved by scalarization; that is, selection of a network utility function that tries to balance throughput and user fairness.
The choice of utility function has a large impact on the computational complexity of the resulting single-objective optimization problem.[27] For example, the common utility of weighted sum rate gives an NP-hard problem with a complexity that scales exponentially with the number of users, while the weighted max-min fairness utility results in a quasi-convex optimization problem with only a polynomial scaling with the number of users.[28] Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures which can improve the operational performance of a distribution system.
The problem of optimization through the reconfiguration of a power distribution system, in terms of its definition, is a historical single objective problem with constraints.
Since 1975, when Merlin and Back[29] introduced the idea of distribution system reconfiguration for active power loss reduction, many researchers have proposed diverse methods and algorithms to solve the reconfiguration problem as a single objective problem.
Some authors have proposed Pareto optimality based approaches (including active power losses and reliability indices as objectives).
For this purpose, different artificial intelligence based methods have been used: microgenetic,[30] branch exchange,[31] particle swarm optimization [32] and non-dominated sorting genetic algorithm.[33] Autonomous inspection of infrastructure has the potential to reduce costs, risks and environmental impacts, as well as ensuring better periodic maintenance of inspected assets.
Typically, planning such missions has been viewed as a single-objective optimization problem, where one aims to minimize the energy or time spent in inspecting an entire target structure.[34] For complex, real-world structures, however, covering 100% of an inspection target is not feasible, and generating an inspection plan may be better viewed as a multiobjective optimization problem, where one aims to both maximize inspection coverage and minimize time and costs.
A recent study has indicated that multiobjective inspection planning indeed has the potential to outperform traditional methods on complex structures.[35]

As there usually exist multiple Pareto optimal solutions for multi-objective optimization problems, what it means to solve such a problem is not as straightforward as it is for a conventional single-objective optimization problem.
Therefore, different researchers have defined the term "solving a Multi-objective optimization problem" in various ways.
This section summarizes some of them and the contexts in which they are used.
Many methods convert the original problem with multiple objectives into a single-objective optimization problem.
This is called a scalarized problem.
If Pareto optimality of the solutions obtained can be guaranteed, the scalarization is said to be done neatly.
Solving a multi-objective optimization problem is sometimes understood as approximating or computing all or a representative set of Pareto optimal solutions.[36][37] When decision making is emphasized, the objective of solving a multi-objective optimization problem is understood as supporting a decision maker in finding the most preferred Pareto optimal solution according to his/her subjective preferences.[1][38] The underlying assumption is that one solution to the problem must be identified to be implemented in practice.
Here, a human decision maker (DM) plays an important role.
The DM is expected to be an expert in the problem domain.
The most preferred results can be found using different philosophies.
Multi-objective optimization methods can be divided into four classes.[2] In so-called no preference methods, no DM is expected to be available, but a neutral compromise solution is identified without preference information.[1] The other classes are so-called a priori, a posteriori and interactive methods and they all involve preference information from the DM in different ways.
In a priori methods, preference information is first asked from the DM and then a solution best satisfying these preferences is found.
In a posteriori methods, a representative set of Pareto optimal solutions is first found and then the DM must choose one of them.
In interactive methods, the decision maker is allowed to iteratively search for the most preferred solution.
In each iteration of the interactive method, the DM is shown Pareto optimal solution(s) and describes how the solution(s) could be improved.
The information given by the decision maker is then taken into account while generating new Pareto optimal solution(s) for the DM to study in the next iteration.
In this way, the DM learns about the feasibility of his/her wishes and can concentrate on solutions that are interesting to him/her.
The DM may stop the search whenever he/she wants to.
More information and examples of different methods in the four classes are given in the following sections.
Scalarizing a Multi-objective optimization problem is an a priori method, which means formulating a single-objective optimization problem such that optimal solutions to the single-objective optimization problem are Pareto optimal solutions to the Multi-objective optimization problem.[2] In addition, it is often required that every Pareto optimal solution can be reached with some parameters of the scalarization.[2] With different parameters for the scalarization, different Pareto optimal solutions are produced.
A general formulation for a scalarization of a multi-objective optimization problem is thus

min_x g( f_1(x), …, f_k(x), θ )  subject to  x ∈ X_θ,

where θ is a vector parameter, the set X_θ ⊆ X is a set depending on the parameter θ, and g : R^(k+1) → R is a function.
Very well-known examples of such scalarizations exist; a somewhat more advanced example is a problem of the form

max  (Σ_{j=1}^{r} Z_j) / W_j  −  (Σ_{j=r+1}^{s} Z_j) / W_{r+1}
s.t.  AX = b,  X ≥ 0.

For example, portfolio optimization is often conducted in terms of mean-variance analysis.
In this context, the efficient set is a subset of the portfolios parametrized by the portfolio mean return μ_P in the problem of choosing portfolio shares so as to minimize the portfolio's variance of return σ_P subject to a given value of μ_P; see Mutual fund separation theorem for details.
Alternatively, the efficient set can be specified by choosing the portfolio shares so as to maximize the function μ_P − b·σ_P; the set of efficient portfolios consists of the solutions as b ranges from zero to infinity.
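As one concrete and deliberately simple illustration of scalarization, the sketch below minimizes a weighted sum of two objectives with SciPy and sweeps the weight to trace out candidate Pareto optimal solutions. The objective functions, weights and the choice of a weighted sum are illustrative assumptions, not the only scalarization in use.

```python
# A minimal sketch of a weighted-sum scalarization of a bi-objective problem,
# solved with SciPy; the objective functions and weights are hypothetical.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # e.g. "cost"
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # e.g. "discomfort"
    return x[0] ** 2 + (x[1] - 2.0) ** 2

def scalarized(x, w):
    # g(f1, f2; w) = w*f1 + (1 - w)*f2 for a weight parameter 0 <= w <= 1
    return w * f1(x) + (1.0 - w) * f2(x)

# Sweeping the weight traces out (a subset of) the Pareto front.
for w in np.linspace(0.0, 1.0, 5):
    res = minimize(scalarized, x0=np.zeros(2), args=(w,))
    print(f"w={w:.2f}  x*={np.round(res.x, 3)}  f=({f1(res.x):.3f}, {f2(res.x):.3f})")
```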
When the decision maker does not explicitly articulate any preference information, the multi-objective optimization method can be classified as a no-preference method.[2] A well-known example is the method of global criterion,[41] in which a scalarized problem of the form

min_x ‖ f(x) − z^ideal ‖  subject to  x ∈ X

is solved.
In the above problem, ‖·‖ can be any L_p norm, with common choices including L_1, L_2 and L_∞.[1] The method of global criterion is sensitive to the scaling of the objective functions; thus, it is recommended that the objectives be normalized into a uniform, dimensionless scale.[1][38]

A priori methods require that sufficient preference information is expressed before the solution process.[2] Well-known examples of a priori methods include the utility function method, lexicographic method, and goal programming.
In the utility function method, it is assumed that the decision maker's utility function is available.
A mapping u : Y → R is a utility function if, for all y¹, y² ∈ Y, it holds that u(y¹) > u(y²) if the decision maker prefers y¹ to y², and u(y¹) = u(y²) if the decision maker is indifferent between y¹ and y².
The utility function specifies an ordering of the decision vectors (recall that vectors can be ordered in many different ways).
Once u is obtained, it suffices to solve max { u(f(x)) : x ∈ X }, but in practice it is very difficult to construct a utility function that accurately represents the decision maker's preferences[1] – particularly since the Pareto front is unknown before the optimization begins.
The lexicographic method assumes that the objectives can be ranked in the order of importance.
We can assume, without loss of generality, that the objective functions are in the order of importance, so that f_1 is the most important and f_k the least important to the decision maker.
The lexicographic method consists of solving a sequence of single-objective optimization problems of the form

min f_l(x)  subject to  f_j(x) ≤ y_j*,  j = 1, …, l − 1,  x ∈ X,

where y_j* is the optimal value of the above problem with l = j.
Thus, y_1* := min { f_1(x) : x ∈ X }, and each new problem in the sequence adds one new constraint as l goes from 1 to k.
Note that a goal or target value is not specified for any objective here, which makes it different from the Lexicographic Goal Programming method.
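A minimal sketch of the lexicographic method under these assumptions, using SciPy's SLSQP solver and two hypothetical objective functions; the small tolerance added to each constraint is a practical numerical assumption rather than part of the formal definition.

```python
# A minimal sketch of the lexicographic method: objectives are minimized in
# order of importance, and each stage adds a constraint holding the previously
# optimized objectives at their best achieved values (within a small tolerance).
# The two objective functions below are hypothetical illustrations.
import numpy as np
from scipy.optimize import minimize

objectives = [
    lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,          # f1: most important
    lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2,  # f2: less important
]

x0, tol = np.zeros(2), 1e-6
constraints = []
for f in objectives:
    res = minimize(f, x0, method="SLSQP", constraints=constraints)
    best = res.fun
    # require f(x) <= best + tol in all later stages (SLSQP expects g(x) >= 0);
    # default arguments freeze f and best for this stage's constraint
    constraints = constraints + [{"type": "ineq",
                                  "fun": lambda x, f=f, best=best: best + tol - f(x)}]
    x0 = res.x

print("lexicographic optimum:", np.round(x0, 4))
```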
A posteriori methods aim at producing all the Pareto optimal solutions or a representative subset of the Pareto optimal solutions.
Most a posteriori methods fall into either one of the following two classes: mathematical programming-based a posteriori methods, where an algorithm is repeated and each run of the algorithm produces one Pareto optimal solution, and evolutionary algorithms where one run of the algorithm produces a set of Pareto optimal solutions.
Well-known examples of mathematical programming-based a posteriori methods are the Normal Boundary Intersection (NBI),[42] Modified Normal Boundary Intersection (NBIm) [43] Normal Constraint (NC),[44][45] Successive Pareto Optimization (SPO)[46] and Directed Search Domain (DSD)[47] methods that solve the Multi-objective optimization problem by constructing several scalarizations.
The solution to each scalarization yields a Pareto optimal solution, whether locally or globally.
The scalarizations of the NBI, NBIm, NC and DSD methods are constructed with the target of obtaining evenly distributed Pareto points that give a good approximation of the real set of Pareto points.
Evolutionary algorithms are popular approaches to generating Pareto optimal solutions to a Multi-objective optimization problem.
Currently, most evolutionary Multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes.
Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) [48] and Strength Pareto Evolutionary Algorithm 2 (SPEA-2)[49] have become standard approaches, although some schemes based on particle swarm optimization and simulated annealing[50] are significant.
The main advantage of evolutionary algorithms, when applied to solve Multi-objective optimization problems, is the fact that they typically generate sets of solutions, allowing computation of an approximation of the entire Pareto front.
The main disadvantages of evolutionary algorithms are their lower speed and the fact that the Pareto optimality of the solutions cannot be guaranteed.
It is only known that none of the generated solutions dominates the others.
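That statement can be made concrete with a small helper that filters a set of generated objective vectors down to the mutually non-dominated ones (assuming all objectives are minimized); this is only a sketch, not the ranking scheme of NSGA-II or any other specific EMO algorithm:

import numpy as np

def non_dominated(points):
    # points: array of objective vectors, one row per candidate solution
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # rows that are no worse than p everywhere and strictly better somewhere
        dominates_p = np.all(points <= p, axis=1) & np.any(points < p, axis=1)
        if dominates_p.any():
            keep[i] = False
    return points[keep]

print(non_dominated([[1, 4], [2, 2], [3, 3], [4, 1]]))  # [3, 3] is dominated by [2, 2]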
Another paradigm for Multi-objective optimization, based on novelty and using evolutionary algorithms, was recently improved upon.[51] This paradigm searches for novel solutions in objective space (i.e., novelty search[52] on objective space) in addition to searching for non-dominated solutions.
Novelty search is like stepping stones guiding the search to previously unexplored places.
It is especially useful in overcoming bias and plateaus as well as guiding the search in many-objective optimization problems.
Commonly known a posteriori methods are listed below. In interactive methods of optimizing multiple objective problems, the solution process is iterative and the decision maker continuously interacts with the method when searching for the most preferred solution (see e.g. Miettinen 1999,[1] Miettinen 2008[63]).
In other words, the decision maker is expected to express preferences at each iteration in order to get Pareto optimal solutions that are of interest to the decision maker and learn what kind of solutions are attainable.
The following steps are commonly present in interactive methods of optimization:[63] The above aspiration levels refer to desirable objective function values forming a reference point.
Instead of mathematical convergence that is often used as a stopping criterion in mathematical optimization methods, a psychological convergence is often emphasized in interactive methods.
Generally speaking, a method is terminated when the decision maker is confident that he/she has found the most preferred solution available.
There are different interactive methods involving different types of preference information.
Three of those types can be identified based on whether the decision maker provides trade-off information, a reference point, or a classification of the objective functions; a fourth type, in which a small sample of solutions is generated for the decision maker to compare, is also used.[64][65] An example of an interactive method utilizing trade-off information is the Zionts-Wallenius method,[66] in which the decision maker is shown several objective trade-offs at each iteration and is expected to say whether he or she likes, dislikes, or is indifferent to each trade-off.
In reference point based methods (see e.g.[67][68]), the decision maker is expected at each iteration to specify a reference point consisting of desired values for each objective and a corresponding Pareto optimal solution(s) is then computed and shown to him/her for analysis.
In classification based interactive methods, the decision maker is assumed to give preferences in the form of classifying objectives at the current Pareto optimal solution into different classes indicating how the values of the objectives should be changed to get a more preferred solution.
Then, the classification information given is taken into account when new (more preferred) Pareto optimal solution(s) are computed.
In the satisficing trade-off method (STOM)[69] three classes are used: objectives whose values 1) should be improved, 2) can be relaxed, and 3) are acceptable as such.
In the NIMBUS method,[70][71] two additional classes are also used: objectives whose values 4) should be improved until a given bound and 5) can be relaxed until a given bound.
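Reference point and classification based methods typically translate the stated preferences into a scalarized subproblem. A common building block is an achievement scalarizing function of the Wierzbicki type; the sketch below shows one generic form and is not necessarily the exact scalarization used by the methods cited above:

import numpy as np

def achievement(f_values, reference_point, weights, rho=1e-4):
    # Smaller is better: the minimizer of this value over the feasible set is
    # (weakly) Pareto optimal and lies close to the reference point in a
    # weighted Chebyshev sense; rho is a small augmentation term.
    d = np.asarray(weights) * (np.asarray(f_values) - np.asarray(reference_point))
    return np.max(d) + rho * np.sum(d)

# Hypothetical objective vector, aspiration levels (reference point) and weights
print(achievement([3.0, 5.0], reference_point=[2.5, 4.0], weights=[1.0, 0.5]))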
Different hybrid methods exist, but here we consider hybridizing MCDM (multi-criteria decision making) and EMO (evolutionary Multi-objective optimization).
A hybrid algorithm in the context of Multi-objective optimization is a combination of algorithms/approaches from these two fields (see e.g.[63]).
Hybrid algorithms of EMO and MCDM are mainly used to overcome shortcomings by utilizing strengths.
Several types of hybrid algorithms have been proposed in the literature, for example, incorporating MCDM approaches into EMO algorithms as a local search operator or using them to lead a decision maker to the most preferred solution(s).
A local search operator is mainly used to enhance the rate of convergence of EMO algorithms.
The roots of hybrid Multi-objective optimization can be traced to the first Dagstuhl seminar, organized in November 2004.
Here, some of the leading researchers[citation needed] in EMO (Professor Kalyanmoy Deb, Professor Jürgen Branke, and others) and MCDM (Professor Kaisa Miettinen, Professor Ralph E. Steuer, and others) recognized the potential of combining the ideas and approaches of the MCDM and EMO fields to prepare hybrids of them.
Subsequently many more Dagstuhl seminars have been arranged to foster collaboration.
Recently, hybrid Multi-objective optimization has become an important theme in several international conferences in the area of EMO and MCDM (see e.g.[72][73]). Visualization of the Pareto front is one of the a posteriori preference techniques of Multi-objective optimization.
The a posteriori preference techniques provide an important class of Multi-objective optimization techniques.[1] Usually the a posteriori preference techniques include four steps: (1) the computer approximates the Pareto front, i.e. the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point on the Pareto front; (4) the computer provides the Pareto optimal decision whose output coincides with the objective point identified by the decision maker.
From the point of view of the decision maker, the second step of the a posteriori preference techniques is the most complicated one.
There are two main approaches to informing the decision maker.
First, a number of points of the Pareto front can be provided in the form of a list (interesting discussion and references are given in[74]) or using Heatmaps.[75] In the case of bi-objective problems, informing the decision maker concerning the Pareto front is usually carried out by its visualization: the Pareto front, often named the tradeoff curve in this case, can be drawn at the objective plane.
The tradeoff curve gives full information on objective values and on objective tradeoffs, which inform how improving one objective is related to deteriorating the second one while moving along the tradeoff curve.
The decision maker takes this information into account while specifying the preferred Pareto optimal objective point.
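For a bi-objective problem, such a tradeoff curve can be drawn with a few lines of plotting code; the sketch below uses randomly generated objective values purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(200, 2))   # made-up objective values (both minimized)

# keep the non-dominated points and sort them along the first objective
keep = [not any(np.all(q <= p) and np.any(q < p) for q in points) for p in points]
front = points[np.array(keep)]
front = front[np.argsort(front[:, 0])]

plt.scatter(points[:, 0], points[:, 1], s=10, alpha=0.3, label="candidate solutions")
plt.plot(front[:, 0], front[:, 1], "r.-", label="tradeoff curve (Pareto front approximation)")
plt.xlabel("objective f1")
plt.ylabel("objective f2")
plt.legend()
plt.show()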
The idea of approximating and visualizing the Pareto front was introduced for linear bi-objective decision problems by S. Gass and T. Saaty.[76] This idea was developed and applied to environmental problems by J. L. Cohon.[77] A review of methods for approximating the Pareto front for various decision problems with a small number of objectives (mainly two) is provided in [78]. There are two generic ideas on how to visualize the Pareto front in high-order multi-objective decision problems (problems with more than two objectives).
One of them, which is applicable in the case of a relatively small number of objective points that represent the Pareto front, is based on using the visualization techniques developed in statistics (various diagrams, etc.).
The second idea proposes the display of bi-objective cross-sections (slices) of the Pareto front.
It was introduced by W. S. Meisel in 1973,[79] who argued that such slices inform the decision maker on objective tradeoffs.
The figures that display a series of bi-objective slices of the Pareto front for three-objective problems are known as the decision maps.
They give a clear picture of tradeoffs between three criteria.
Disadvantages of such an approach are related to the following two facts. First, the computational procedures for constructing the bi-objective slices of the Pareto front are not stable, since the Pareto front is usually not stable. Second, it is applicable only in the case of three objectives.
In the 1980s, the idea of W. S. Meisel was implemented in a different form – in the form of the Interactive Decision Maps (IDM) technique.[80] More recently, N. Wesner[81] proposed using a combination of a Venn diagram and multiple scatterplot views of the objective space for the exploration of the Pareto frontier and the selection of optimal solutions.
Conversion path
A Conversion path is a description of the steps taken by a user of a website towards a desired end from the standpoint of the website operator or marketer.
The typical Conversion path begins with a user arriving at a landing page and proceeding through a series of page transitions until reaching a final state, either positive (e.g.
purchase) or negative (e.g.
abandoned session).
In practice, the study of the dynamics of this process by the interested party has evolved into a sophisticated field, where various statistical methods are being applied to the optimization of outcomes.
This includes real-time adjustment of presented content, in which a website operator tries to provide deliberate incentives to increase the odds of conversion based on various sources of information, including demographic traits, search history and browsing events.
In practice, this is reflected in different content being presented to users arriving from online advertising versus search engines, and similarly, different content may be presented depending on users' demographic segments.
The fundamental metric describing this process in the aggregate is known as conversion rate.
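In its simplest aggregate form, this metric is just the share of visitors (or sessions) that reach the positive end of the path, as in the following sketch with made-up numbers:

def conversion_rate(conversions, visitors):
    # Share of visitors who completed the desired action (0 if there was no traffic)
    return conversions / visitors if visitors else 0.0

print(conversion_rate(38, 1200))   # e.g. 38 purchases out of 1,200 sessions = about 3.2%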
Search engine optimization
Search engine optimization (SEO) is the process of growing the quality and quantity of website traffic by increasing the visibility of a website or a web page to users of a web search engine.[1] SEO refers to the improvement of unpaid results (known as "natural" or "organic" results) and excludes direct traffic and the purchase of paid placement.
Additionally, it may target different kinds of searches, including image search, video search, academic search,[2] news search, and industry-specific vertical search engines.
Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.
By May 2015, mobile search had surpassed desktop search.[3] As an Internet marketing strategy, SEO considers how search engines work, the computer-programmed algorithms that dictate search engine behavior, what people search for, the actual search terms or keywords typed into search engines, and which search engines are preferred by their targeted audience.
SEO is performed because a website will receive more visitors from a search engine when it ranks higher on the search engine results page (SERP).
These visitors can then be converted into customers.[4] SEO differs from local Search engine optimization in that the latter is focused on optimizing a business' online presence so that its web pages will be displayed by search engines when a user enters a local search for its products or services.
The former instead is more focused on national or international searches.
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web.
Initially, all webmasters only needed to submit the address of a page, or URL, to the various engines which would send a web crawler to crawl that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server.
A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains.
All of this information is then placed into a scheduler for crawling at a later date.
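The crawl-then-index split described above can be illustrated with a toy sketch using the third-party requests and BeautifulSoup libraries; real search engine spiders and indexers are, of course, far more sophisticated:

from collections import Counter
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl_and_index(url):
    # "Spider" step: download the page
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # "Indexer" step: record word frequencies and outgoing links for later crawling
    words = Counter(soup.get_text().lower().split())
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return words, links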
Website owners recognized the value of a high ranking and visibility in search engine results,[6] creating an opportunity for both white hat and black hat SEO practitioners.
According to industry analyst Danny Sullivan, the phrase "Search engine optimization" probably came into use in 1997.
Sullivan credits Bruce Clay as one of the first people to popularize the term.[7] On May 2, 2007,[8] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[9] that SEO is a "process" involving manipulation of keywords and not a "marketing service." Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB.
Meta tags provide a guide to each page's content.
Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content.
Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10][dubious – discuss] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords.
Early search engines, such as Altavista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12] By relying so much on factors such as keyword density which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation.
To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters.
This meant moving away from heavy reliance on term density to a more holistic process for scoring semantic signals.[13] Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources.
Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with Search engine optimization and related topics.[14] Companies that employ overly aggressive techniques can get their client websites banned from the search results.
In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[15] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[16] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[17] Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, webchats, and seminars.
Major search engines provide information and guidelines to help with website optimization.[18][19] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[20] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the "crawl rate", and track the web pages index status.
In 2015, it was reported that Google was developing and promoting mobile search as a key feature within future products.
In response, many brands began to take a different approach to their Internet marketing strategies.[21] In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages.
The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[22] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another.
In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
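The random surfer idea can be made concrete with a toy power-iteration implementation of PageRank on a tiny, invented link graph; the damping factor of 0.85 is the value commonly quoted for the original algorithm:

import numpy as np

def pagerank(adjacency, damping=0.85, iterations=100):
    # adjacency: dict mapping each page to the list of pages it links to
    pages = list(adjacency)
    n = len(pages)
    index = {p: i for i, p in enumerate(pages)}
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        new = np.full(n, (1.0 - damping) / n)
        for page, outlinks in adjacency.items():
            if outlinks:
                share = damping * rank[index[page]] / len(outlinks)
                for target in outlinks:
                    new[index[target]] += share
            else:
                # a dangling page spreads its rank evenly over all pages
                new += damping * rank[index[page]] / n
        rank = new
    return dict(zip(pages, rank))

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))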
Page and Brin founded Google in 1998.[23] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[24] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings.
Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank.
Many sites focused on exchanging, buying, and selling links, often on a massive scale.
Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[25] By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation.
In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals.[26] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages.
Some SEO practitioners have studied different approaches to Search engine optimization, and have shared their personal opinions.[27] Patents related to search engines can provide information to better understand search engines.[28] In 2005, Google began personalizing search results for each user.
Depending on their history of previous searches, Google crafted results for logged in users.[29] In 2007, Google announced a campaign against paid links that transfer PageRank.[30] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links.
Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollow links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[31] As a result of this change, the usage of nofollow led to the evaporation of PageRank.
In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting.
Additionally several solutions have been suggested that include the usage of iframes, Flash and JavaScript.[32] In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[33] On June 8, 2010 a new web indexing system called Google Caffeine was announced.
Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up quicker on Google than before.
According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[34] Google Instant, real-time-search, was introduced in late 2010 in an attempt to make search results more timely and relevant.
Historically site administrators have spent months or even years optimizing a website to increase search rankings.
With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[35] In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources.
Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice.
However, Google implemented a new system which punishes sites whose content is not unique.[36] The 2012 Google Penguin attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine.[37] Although Google Penguin has been presented as an algorithm aimed at fighting web spam, it really focuses on spammy links[38] by gauging the quality of the sites the links are coming from.
The 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.
Hummingbird's language processing system falls under the newly recognized term of "conversational search", where the system pays more attention to each word in the query in order to match pages to the meaning of the whole query rather than to a few words.[39] With regard to the changes made to Search engine optimization, for content publishers and writers, Hummingbird is intended to resolve issues by getting rid of irrelevant content and spam, allowing Google to surface high-quality content and rely on "trusted" authors.
In October 2019, Google announced they would start applying BERT models for English language search queries in the US.
Bidirectional Encoder Representations from Transformers (BERT) was another attempt by Google to improve their natural language processing but this time in order to better understand the search queries of their users.[40] In terms of Search engine optimization, BERT intended to connect users more easily to relevant content and increase the quality of traffic coming to websites that are ranking in the Search Engine Results Page.
The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results.
Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically.
The Yahoo! Directory and DMOZ, two major directories which closed in 2014 and 2017 respectively, both required manual submission and human editorial review.[41] Google offers Google Search Console, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links[42] in addition to their URL submission console.[43] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[44] however, this practice was discontinued in 2009.
Search engine crawlers may look at a number of different factors when crawling a site.
Not every page is indexed by the search engines.
The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[45] Today, most people search on Google using a mobile device.[46] In November 2016, Google announced a major change to the way it crawls websites and started to make its index mobile-first, which means the mobile version of a given website becomes the starting point for what Google includes in its index.[47] In May 2019, Google updated the rendering engine of its crawler to be the latest version of Chromium (74 at the time of the announcement).
Google indicated that they would regularly update the Chromium rendering engine to the latest version.[48] In December 2019, Google began updating the User-Agent string of their crawler to reflect the latest Chrome version used by their rendering service.
The delay was to allow webmasters time to update their code that responded to particular bot User-Agent strings.
Google ran evaluations and felt confident the impact would be minor.[49] To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain.
Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex"> ).
When a search engine visits a site, the robots.txt located in the root directory is the first file crawled.
The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled.
As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled.
Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches.
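Python's standard library can parse a robots.txt file in the same way a well-behaved crawler would; the domain and user agent below are placeholders used only for illustration:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")   # hypothetical site
rp.read()                                          # fetch and parse the file
# A compliant crawler checks permission before fetching a page, e.g. an internal cart page
print(rp.can_fetch("MyCrawler", "https://www.example.com/cart/"))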
In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[50] A variety of methods can increase the prominence of a webpage within the search results.
Cross linking between pages of the same website to provide more links to important pages may improve its visibility.[51] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[51] Updating content so as to keep search engines crawling back frequently can give additional weight to a site.
Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic.
URL canonicalization of web pages accessible via multiple URLs, using the canonical link element[52] or via 301 redirects can help make sure links to different versions of the URL all count towards the page's link popularity score.
SEO techniques can be classified into two broad categories: techniques that search engine companies recommend as part of good design ("white hat"), and those techniques of which search engines do not approve ("black hat").
The search engines attempt to minimize the effect of the latter, among them spamdexing.
Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[53] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[54] An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception.
As the search engine guidelines[18][19][55] are not written as a series of rules or commandments, this is an important distinction to note.
White hat SEO is not just about following guidelines but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.
White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the online "spider" algorithms, rather than attempting to trick the algorithm from its intended purpose.
White hat SEO is in many ways similar to web development that promotes accessibility,[56] although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception.
One black hat technique uses hidden text, either as text colored similar to the background, in an invisible div, or positioned off screen.
Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
Another category sometimes used is grey hat SEO.
This is in between the black hat and white hat approaches, where the methods employed avoid the site being penalized but do not focus on producing the best content for users.
Grey hat SEO is entirely focused on improving search engine rankings.
Search engines may penalize sites they discover using black or grey hat methods, either by reducing their rankings or eliminating their listings from their databases altogether.
Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review.
One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[57] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's search engine results page.[58] SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay per click (PPC) campaigns, depending on the site operator's goals.
Search engine marketing (SEM) is the practice of designing, running and optimizing search engine ad campaigns.[59] Its difference from SEO is most simply depicted as the difference between paid and unpaid priority ranking in search results.
Its purpose concerns prominence more than relevance; website developers should regard SEM as highly important with respect to visibility, as most users navigate to the primary listings of their search.[60] A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[61] In November 2015, Google released a full 160-page version of its Search Quality Rating Guidelines to the public,[62] which revealed a shift in their focus towards "usefulness" and mobile search.
In recent years the mobile market has exploded, overtaking the use of desktops, as shown by StatCounter in October 2016, when it analyzed 2.5 million websites and found that 51.3% of the pages were loaded by a mobile device.[63] Google has been one of the companies utilizing the popularity of mobile usage by encouraging websites to use its Google Search Console and the Mobile-Friendly Test, which allows companies to measure their website against the search engine results and gauge how user-friendly it is.
SEO may generate an adequate return on investment.
However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals.
Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[64] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic.
According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[65] It is considered a wise business practice for website operators to liberate themselves from dependence on search engine traffic.[66] In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.
Optimization techniques are highly tuned to the dominant search engines in the target market.
The search engines' market shares vary from market to market, as does competition.
In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[67] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[68] As of 2006, Google had an 85–90% market share in Germany.[69] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[69] As of June 2008, the market share of Google in the UK was close to 90% according to Hitwise.[70] That market share is achieved in a number of countries.
As of 2009, there are only a few large markets where Google is not the leading search engine.
In most cases, when Google is not leading in a given market, it is lagging behind a local player.
The most notable example markets are China, Japan, South Korea, Russia and the Czech Republic where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.
Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address.
Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[69] On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google.
SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations.
On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[71][72] In March 2006, KinderStart filed a lawsuit against Google over search engine rankings.
KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%.
On March 16, 2007, the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[73][74]
Multivariate landing page optimization