DDoS Thwarted: A Win for Amazon

Posted: June 26, 2020

By: Sean Lanagan, Attorney at Law

For those concerned that Amazon has become too powerful, perhaps its recent Threat Landscape Report lends perspective. In the week of February 17, 2020, AWS Shield detected and successfully thwarted a DDoS attack of unprecedented scale. Amazon’s report revealed a single DDoS attack with a bit rate of 2.3 Tbps, the largest single DDoS event ever recorded. This event was 283% larger in magnitude than events from AWS’s previous quarter and dwarfed the 1.7 Tbps record set in 2018. To underscore the massive scale of this attack, the BBC noted that 2.3 Tbps is a little under half the rate British Telecommunications sees as a typical daytime surge across the entire United Kingdom, a sobering burden for any single company to shoulder.

At a technical level, AWS seemed unsurprised in its report. It noted this particular attack was a UDP reflection attack, one of the largest-volume forms of DDoS attack. In this type of event, the attacker spoofs the victim’s IP address and solicits UDP services from a vast number of devices distributed across the Internet; the resulting replies from the solicited devices are intended to overload the victim’s network (e.g., 2.3 Tbps of replies). The attack vector used in the February attack was CLDAP reflection. CLDAP, the Connection-less Lightweight Directory Access Protocol, is an otherwise legitimate protocol used for accessing and modifying stored directory data.

Akamai, a leading enterprise-level resource for cyber security professionals, raised the flag on this form of attack in a 2017 publication. It noted that CLDAP could be maliciously leveraged for a distributed denial of service because of the CLDAP protocol’s amplification factor. In its study, Akamai found that a single 52-byte query could generate a reply to the targeted victim up to seventy times that size.
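Those figures make the economics of the attack easy to sketch. The following back-of-the-envelope calculation is my own illustration (it is not from the AWS or Akamai reports), combining the 52-byte query, Akamai’s up-to-70x amplification factor, and the 2.3 Tbps rate AWS reported:

```python
# Hypothetical back-of-the-envelope arithmetic for a CLDAP reflection
# attack, using the figures cited above.
QUERY_BYTES = 52        # size of one spoofed CLDAP query
AMPLIFICATION = 70      # Akamai's observed upper-bound amplification factor
TARGET_BPS = 2.3e12     # 2.3 Tbps, the rate AWS observed at the victim

# Each small query can reflect a much larger reply at the victim.
reply_bytes = QUERY_BYTES * AMPLIFICATION

# Working backwards: how much spoofed-query bandwidth would an attacker
# need to generate a 2.3 Tbps flood through 70x reflectors?
attacker_bps = TARGET_BPS / AMPLIFICATION

print(f"One {QUERY_BYTES}-byte query can reflect ~{reply_bytes:,} bytes")
print(f"~{attacker_bps / 1e9:.0f} Gbps of queries could produce 2.3 Tbps at the victim")
```

In other words, roughly 33 Gbps of spoofed traffic could account for the record-setting flood, which is why reflection attacks are so attractive to malicious actors.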

Although this was a known attack vector for AWS, prior to this event the largest attacks AWS had observed were less than 1 Tbps. This attack serves as a warning: malicious actors are capable of spinning up traffic that rivals the traffic generated by our most advanced nations. If you are a small or medium-sized business owner, attacks like this can be thwarted through ingress volumetric filtering; however, such firewalls require constant maintenance to account for new attack vectors. In this attack, AWS proved it is prepared for massive cyber-attacks at a scale more than double what it was accustomed to. If your cybersecurity posture is not in line with this metric, consider leveraging Amazon’s size for your benefit. While many may take issue with Amazon from an antitrust perspective, it is irrefutable that Amazon’s size precipitates its need to develop robust cyber security tools; as long as those tools are available in the marketplace, we all stand to benefit.

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Doxing: You May be Vulnerable

Posted: May 18, 2020

By: Sean Lanagan, Attorney at Law

In light of our increased use of remote workstations due to COVID-19, it is particularly relevant to write this blog on the threats related to doxing. The term “dox” (short for “documents”) refers to compiling and publishing online another’s personally identifiable information for harassment, humiliation, or intimidation purposes. To this end, malicious actors use various tools to scour the Internet for sources of public data, compile it, and then use the compilation to build profiles on individuals. Such a profile can be used in a doxing campaign to exploit targeted people for identity theft, financial crimes, and other criminal acts. The personal details on the targets help threat actors conduct attacks including spear phishing, whaling, and ransomware. Targets can range from high-net-worth individuals to administrative assistants of a targeted company. In short, everyone who has, knows, or is connected to something others deem valuable is vulnerable.

The easiest way to combat such threats is to stop the information at its source. Exercising discretion in social media profiles, ancestry databases, and online forums is the most obvious place to start. There are, however, public records websites that offer malicious actors a convenient resource for gathering historical data about their targets, often free of charge. These websites use algorithms to periodically scrape public records databases, social media websites, and online forums to generate concise reports. The reports can contain prior and present home addresses, phone numbers, email addresses, names and aliases, family members, job titles, etc. These websites walk a fine line with what they can legally publish as compiled personally identifiable information. To mitigate their exposure under U.S. laws and regulations, many of these websites offer a way for people to remove their personal information from being listed. Below are a few tutorials on how to remove your profile from some of the more notable sites that publish public records.

    • Beenverified.com -> Navigate to the opt-out link; Enter your name and state of residence; Click on your profile record; Enter your email address to send verification email; Click “Verify Opt-Out” in confirmation email.
    • Fastbackgroundcheck.com -> Navigate to the opt-out link; Check verification box; Click “Begin Removal Process”; Enter your name; Click on your profile record; Click “Remove My Record”.
    • Peoplefinders.com -> Search for your profile record; Click on your record; Copy the URL of your profile record; Navigate to the opt-out link; Paste your profile record URL into the opt-out form; Enter your email; Click “Send Request”; Confirm opt-out in confirmation email.
    • Whitepages.com -> Search for your profile on whitepages.com; Click “View Free Details”; Copy website URL; Navigate to the opt-out link; Paste URL into the opt-out form and click “Opt-Out”; Click “remove me” and provide a reason; Provide a telephone number to receive a verification phone call; Verify PIN code via telephone to complete the opt-out process.

The above instructions were compiled with the help of the NTIC Cyber Center. There are over 35 similar websites that may currently be publishing your data, putting you and your family at risk of cybercrime. For a more complete delisting of your personal data, please contact me at the email below.

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Zoom Scandal: Lessons Learned

Posted: April 20, 2020

By: Sean Lanagan, Attorney at Law

In the wake of quarantine, countless articles have been published raising concern over Zoom’s cyber security vulnerabilities. Although much of the spotlight has been cast due to COVID-19, Zoom’s problems are far-reaching, entrenched at the deepest levels of the application’s design. For those who haven’t been following the company, here is a brief timeline:

    • July 8, 2019, researcher Jonathan Leitschuh published an article exposing a vulnerability on the Mac Zoom Client that would allow any malicious website to enable the user’s camera without their permission.
    • July 11, 2019, The Electronic Privacy Information Center filed a complaint against Zoom before the FTC.
    • Throughout this period, Zoom conducted several investor calls and published 10-Q and 10-K SEC filings.
    • March 26, 2020, Motherboard reported that Zoom’s iOS app sent user analytics to Facebook, even if the user didn’t have a Facebook account (e.g., the time the user opened the app, details of the user’s device, the user’s time zone, phone carrier, and unique advertising identifier).
    • Starting March 30, the FBI and the New York and Connecticut Attorneys General raised concerns about Zoom-bombing (i.e., trolls hijacking non-password-protected meeting URLs and sharing expletives).
    • March 31, several news sources reported that Zoom’s software was not end-to-end encrypted, as Zoom had advertised; instead it used less secure transport encryption.
    • Zoom’s encryption method was later observed by Citizen Lab to include the transmission of meeting encryption keys through data centers in China.
    • SpaceX banned its employees from using the video conferencing software; NASA soon followed. Google, the U.S. Senate, and other public and private organizations issued similar advisory notices to employees.
    • On April 1, an actor on the dark web posted a collection of 352 compromised Zoom accounts, which included “email addresses, passwords, meeting IDs, host keys and names, and the type of Zoom account.”[1]

Legal action

A few lawsuits have followed these lapses in security. Most recently, a class action was filed in the Northern District of California. The suit accuses Zoom of promulgating materially false and misleading statements upon which investors relied, to their detriment, when valuing the company. The complaint cites Zoom’s IPO documents, which touted the company’s “robust security capabilities, including end-to-end encryption, secure login…and role-based access controls.” The named Plaintiff also points to Zoom’s 10-Q and 10-K SEC filings and investor calls as evidence of these misleading statements. These statements either “ensure[d] security and compliance,” failed to disclose the technical implications of the integration with Facebook, or used “catch-all” provisions not tailored to Zoom’s actual known risks. Although this case claims violations of SEC laws, these facts could also support actions sounding in negligence or state DTPA laws.

Lessons Learned

While it is arguably clear that Zoom’s failure to design secure video conferencing software left many users vulnerable, it is important to note how redress is being sought. The named Plaintiff in this class does not mount a claim based on the cyber security vulnerabilities themselves, but rather on the fact that the vulnerabilities were not adequately disclosed. This “insufficient notice” cause of action has become an emerging trend in this field. Whether through state DTPA laws, breach of warranty, or SEC violations like the complaint quoted above, the claims rest upon the statements made about the company’s security posture, rather than the adequacy of its actual security.

If you are a company that collects personally identifiable information, or markets “safe” or “secure” technology solutions, review your publicized materials. Make sure your press releases don’t over-promise your deliverables, and track the “reasonable” data security practices identified by the FTC. Without a general data privacy regulation or a more articulable duty of care applied to stewards of U.S. data, deceptive or misleading business acts or practices will continue to be the crux of cyber security litigation. Watch what you say; don’t be the next Zoom.

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Facebook Camera Scandal: Bug or Beta?

Posted: November 23, 2019

By: Sean Lanagan, Attorney at Law

Last week Facebook confirmed that its mobile application had a “bug” that would open the user’s camera and start recording while the user scrolled through their news feed. Unfortunately, it seems Facebook only publicly acknowledged this behavior after Facebook user Joshua Maddux tweeted a screen recording that went viral. Earlier in the month, other users had noticed this vulnerability on iPhones and tweeted their concern; only after the story spiraled, however, did Facebook go on the defensive.

Although cybersecurity analysts agree with Facebook’s statement that there is “no evidence of photos or videos uploaded due to this bug,” which is believed to have affected only iPhone users running the latest iOS 13 software, this case has lasting implications. By now, reeling from public trust issues, Facebook should have an incident response plan that is proactive, not reactive.

Legal Implications?

Without any concrete and articulable injury reported, it is fair to conclude Facebook is in the clear, for now. If someone comes forward with evidence that nonconsensual videos or photos were taken from their device and disseminated due to this “bug,” Facebook could face tort claims ranging from negligence to intrusion upon seclusion. Although intent would be a question of fact, Facebook would again have to appeal to public trust to maintain its innocence. A scary thought.


Public trust will distinguish whether a mistake is perceived as a bug or a beta test. Without public trust, public opinion will not extend the benefit of the doubt, and an accusation will be an indictment. If your business has a social media presence, learn from Facebook: hold your business accountable before others do. If you are the first to break the bad news, you can frame the narrative.

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Data Privacy Through A Fifth Amendment Lens

Posted: November 02, 2019

By: Sean Lanagan, Attorney at Law

If a person approaches a police officer and confesses to murder, can that statement be used against them in their prosecution? Yes, of course. If a police officer approaches a person and asks them if they committed a murder, can their statement be used against them in their prosecution? It depends. Both statements are freely given by the suspect, so why the disparity?

The Supreme Court has acknowledged an adversarial relationship between officers and suspects and has identified the Miranda Warnings as a prophylactic rule to protect suspects’ rights. In Miranda, the Court determined that protecting an individual from being a witness against himself is of such vital importance, that officers are required to instruct them accordingly. Obviously, this is not without limitation. Officers are not required to save suspects from themselves, so where is this line drawn? Elicitation.

In Rhode Island v. Innis, the Supreme Court held that when a suspect is in custody, interrogation sufficient to give rise to Miranda Warnings is “any act, verbal or non-verbal, by the police that they know or should know is reasonably likely to elicit an incriminating response from the suspect.”

So, what does this mean for data privacy? Four petabytes of data are generated on Facebook per day, which is equivalent to the storage required for 24/7 HD video running for roughly thirteen years. All of that freely given content is a matter of record and could be used against the contributor at any time. Obviously, social media platforms are not required to save contributors from themselves, but what about elicited content?
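That equivalence holds up as rough arithmetic. The sketch below is my own sanity check; the ~78 Mbps HD bitrate is an assumption chosen for illustration, as the figure’s source does not state one:

```python
# Rough sanity check: how many years of continuous HD video would fit in
# the four petabytes Facebook generates per day? The 78 Mbps bitrate is
# an assumed figure for high-quality HD video.
PETABYTE = 1e15                          # bytes
daily_bytes = 4 * PETABYTE               # one day of Facebook data
hd_bytes_per_sec = 78e6 / 8              # 78 Mbps in bytes per second

seconds_of_video = daily_bytes / hd_bytes_per_sec
years_of_video = seconds_of_video / (365 * 24 * 3600)
print(f"~{years_of_video:.0f} years of 24/7 HD video")   # roughly thirteen
```

A lower streaming bitrate would stretch the figure to centuries; either way, the scale of this freely given record is staggering.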

Cambridge Analytica has now become well-known for eliciting and then extrapolating Facebook user content to push targeted political advertisements. Is this wrong? Everyone can agree interfering with the democratic electoral process should be prohibited, but to what degree is targeted marketing interference?

I argue that, through the purview of the Fifth Amendment, relationships with social media platforms are not inherently adversarial, but they can become so. Using someone’s freely given statements to better understand their mental state would not be prohibited under the Fifth Amendment. Using those statements “against them,” however, makes the relationship adversarial. From a social engineering standpoint, an adversarial relationship could be found where there is intervention in a person’s free will. The equivalent in tort law would be a superseding event that breaks the chain of causation. Under tort doctrine, superseding intervention is a high threshold to meet, and that threshold, I would argue, sits at the conceptual core of what the Court intended to prevent through Miranda: a circumstantial disparity in power that gives rise to a statement against one’s own interest that otherwise would not have occurred.

Analogically, Cambridge Analytica crossed this line when they elicited information from Facebook users through personality surveys, from which they used big data to tailor their marketing in an effort to change the political views of those persons targeted. The Supreme Court drew the line at elicitation. If an officer intends to elicit a confession, they must first give the suspect a warning. Is it reasonable to ask the same from our digital record keepers?

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Does AI Attribute Agency? (Part II)

Posted: October 11, 2019

By: Sean Lanagan, Attorney at Law

In short, the answer appears to be no. According to the Restatement (Third) of Agency, an agent must be a person, which is defined as (a) an individual; (b) an organization or association that has legal capacity to possess rights and incur obligations; (c) a government, political subdivision, or instrumentality or entity created by a government; or (d) any other entity that has legal capacity to possess rights and incur obligations. In other words, to confer agency on an autonomous machine or program would implicitly require granting legal rights and obligations to machines and programs. Although some machines are more protected than others under the law (e.g., the Computer Fraud and Abuse Act), machines themselves, and the programs run on them, have yet to receive legal rights independent of the host they serve.

Why is this important?

Agency gives rise to vicarious liability under tort law. In other words, an individual can be held culpable for the torts committed by another if that other “person” is found to be their agent. This is particularly important as it relates to AI utilized to cause harm. If AI does cause damages, who pays? If AI is an agent, the principal pays. If AI is not treated as an agent, which appears to be the case under modern legal interpretation, AI would be classified as a contractor, responsible for its own torts. This is a legal dilemma that has yet to be solved by legislation. While it is true that agency could be attributed to AI created by the government, this potentially leaves plenty of room for malicious AI to be created by the private sector, resulting in questionable recourse.

Note, the Restatement is a codification of case law produced by the American Law Institute; it is not binding authority, but acts as persuasive guidance in interpreting modern legal issues.

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Does The Fourth Amendment Protect Us from AI? (Part I)

Posted: September 21, 2019

By: Sean Lanagan, Attorney at Law

Fourth Amendment Protections

The Fourth Amendment expressly grants people “the right to be secure in their persons, houses, papers, and effects” from the government’s unreasonable search or seizure. The courts’ interpretation of this amendment has formed the basis for what we understand to be “a reasonable expectation of privacy.” As Justice Harlan concurred in Katz, this standard is met if the person has “exhibited an actual (subjective) expectation of privacy, and second that the expectation be one that society is prepared to recognize as ‘reasonable.’” Katz v. United States, 389 U.S. 347, 361 (1967). This two-fold requirement for triggering Fourth Amendment protection is both the product of, and the catalyst for, a substantial amount of case law. However, without belabored study, the question of whether a person has a reasonable expectation of privacy over their electronic data, accessed by AI, can be answered in the affirmative. In 2014, the Supreme Court held that accessing data stored on a suspect’s phone requires a warrant and, barring exigent circumstances, cannot be accomplished through a search incident to arrest. Riley v. California, 573 U.S. 373, 387 (2014). The extent of this data protection was most recently reviewed by the Supreme Court in Carpenter v. United States. There, the Court held that “[m]apping a cell phone’s location over the course of 127 days” without first obtaining a warrant supported by probable cause “contravenes that expectation” of privacy, despite those records being held by data service providers. Carpenter v. United States, 138 S. Ct. 2206 (2018). Several lower courts have also compared electronic data to the “papers” and “effects” found in the Constitution. Because electronic data implicates Fourth Amendment protection, an analysis is required to determine whether AI technology used to search a device would intrude upon this reasonable expectation of privacy.

Legal Framework for Technology-based Searches

Technology-based searches make up a narrow category in case law, which makes it difficult to discern how the Supreme Court would interpret AI. However, a framework for technology-based searches can be gleaned from Kyllo v. United States, 533 U.S. 27, 34 (2001). In that case, officers used a thermal imaging device to generate probable cause to search a residence suspected of housing a marijuana grow. The Supreme Court held that such surveillance is a “search,” and is presumptively unreasonable without a warrant, when the government uses a device that is not in general public use to explore details of the home that would previously have been unknowable without physical intrusion. While Kyllo involved a home, where greater privacy rights attach, rather than a computer (i.e., an effect), the notion of technology being available for public use is an important threshold for a warrantless technology-based search. Most recently, the Court in Carpenter v. United States described “a cell phone [as]—almost a ‘feature of human anatomy,’” noting studies of how ingrained phones have become in our physical location. Supra, citing Riley, 189 L. Ed. 2d 430, 441. This anatomical outlook on technology is at least facially indicative of the Court’s intent to bolster people’s privacy over their technology. Taken in totality, these cases provide insight as to how the Supreme Court may treat an AI algorithm performing a search:

1) AI technology, not generally available to the public, used to search the contents of a home likely requires a warrant;

2) AI searches of cell phones (and potentially other mobile electronic devices) should pass the same level of scrutiny required to search an individual’s person (e.g. saliva, urine, or blood, depending on the physical intrusion of the AI search algorithm).


In summary, the law has taken shape to objectively provide individuals with privacy over their privately held data. The crux of this protection, as discussed in prior posts, turns on an individual’s subjective manifestation of privacy over their data. If the government were successful in arguing that a claimant failed to exhibit measures manifesting a subjective intent to safeguard the data claimed to be private, the claimant would likely not have constitutional protections. This is particularly important when considering the ability of AI algorithms to extrapolate any and all data in the ether, whether relating to an individual’s person (e.g., facial recognition) or effects (e.g., a computer). This said, although it is reasonable to conclude AI can conduct a search protected under the Fourth Amendment, the Constitution only protects persons from the government. The question remains whether the Supreme Court would attribute agency to an autonomous AI algorithm that performs the search. This question will be analyzed in further detail in Part II of this posting.

**Please note: this article is not to be interpreted as legal advice.** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Are Messages Sent On An Unencrypted Network Private?

Posted: December 07, 2018

By: Sean Lanagan, Attorney at Law

Unencrypted network defined.

For the purposes of this blog, it is sufficient to associate an unencrypted network with public Wi-Fi or a “free” internet connection at your local café that does not require a password. This is an important distinction because, as discussed later, whether or not electronic communications are “readily accessible to the general public” is a matter central to the issuance of privacy rights. On this issue of privacy, I attempt to add clarity by defining how the courts have interpreted the applicable parameters of various legislation and how I expect the Supreme Court to thread this needle.

Wiretap Act and the Electronic Communications Privacy Act (ECPA)

The Wiretap Act and its amendments under the ECPA impose civil and criminal penalties against any person and/or entity that intentionally intercepts, endeavors to intercept, or procures any other person to intercept any wire, oral, or electronic communication. Specifically, “intercept” is defined as the “acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device.” “[W]ire communication” is a transmission made in whole or in part through the use of facilities for the transmission of communications by the aid of wire, cable, or other like connection, affecting interstate or foreign commerce. “[E]lectronic communication” is the transfer of data in whole or in part by a wire, radio, electromagnetic, photoelectronic, or photooptical system that affects interstate or foreign commerce and is not otherwise a wire or oral communication.

Notably, this act varies from constitutional privacy doctrine in that the Constitution protects individuals from government privacy violations, whereas this act protects against intrusions on privacy by non-state actors. On its face, this act seems to protect the privacy of vast amounts of electronic communication. However, the exceptions in this act make it very complicated to interpret. Particularly relevant for debate is §2511(2)(g)(i), which provides that it shall not be unlawful “to intercept or access an electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public.” The act defines “readily accessible to the general public,” with respect to radio communication, as communication that is “not-- (A) scrambled or encrypted . . .” Federal courts have wrestled with the legislative intent behind this provision.

To further complicate the subject, the rise of free “sniffer” tools like Wireshark, which enable people using the same network connection to capture data packets transferred through the wireless network, raises a serious question as to what is “readily accessible to the general public.” At the time of this writing, the Ninth Circuit Court of Appeals is the highest federal court in the country to issue a recent ruling on this issue.

In Joffe v. Google, Inc., 746 F.3d 920 (9th Cir. 2013), Plaintiffs brought a consolidated class action against Google under the Wiretap Act for obtaining nearly 600 gigabytes of transmission data from unencrypted home and business Wi-Fi networks via Google’s Street View cars driving on public roads. The gathered information included the network’s name (SSID), the MAC address of the router, the signal strength, and whether the network was encrypted. For unencrypted networks, however, the cars also captured payload data, i.e., data “transmitted by a device connected to [the] Wi-Fi network, such as personal emails, usernames, passwords, videos, and documents.” The data in dispute was collected from over 30 countries. In this case, the appellate court affirmed the lower court’s rejection of Google’s argument. Google asserted that data transmitted through a Wi-Fi network is a “radio communication” that is exempt under the act as “readily accessible to the general public.” Rejecting this notion, the Ninth Circuit held that “radio communication” excludes payload data transmitted over a Wi-Fi network. The court reasoned that the ordinary meaning of radio communication cannot be anything transmitted over a radio frequency, as Google contended, as that would expand the statute to include “television broadcasts, Bluetooth devices, cordless and cellular phones, garage door openers. . .” Concluding that Google’s interpretation would unduly expand the scope of the act’s exception, the Ninth Circuit confined the phrase “radio communication” so as not to include payload data transmitted over a Wi-Fi network.

Google’s position was not unfounded; a year before the Joffe holding, the Northern District of Illinois held that “in light of the ease of ‘sniffing’ Wi-Fi networks, [sic] communications sent on an unencrypted Wi-Fi network are readily accessible to the general public.” In re Innovatio IP Ventures, LLC Patent Litig., 886 F. Supp. 2d 888 (N.D. Ill. 2012). The court further stated that the “public’s lack of awareness of the ease with which unencrypted Wi-Fi communications can be intercepted by a third-party is [sic] irrelevant in determining whether those communications are ‘readily accessible to the general public,’” and urged Congress to modify the Wiretap Act. It is important to note, however, that the court made this ruling under a distinguishable set of circumstances. Here, Innovatio IP Ventures had modified the Wireshark software to overwrite the data payload before the results were provided to the user. To this end, Innovatio obtained only the header information of the data packets, i.e., the source address, destination address, packet length, and checksum data, revealing network configuration information. This header information is akin to pen register data, protected under another statute.
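The header/payload distinction the Innovatio court relied on can be made concrete. Below is a simplified sketch of my own (it is not Wireshark, nor the tool Innovatio actually used) that splits a minimal IPv4/UDP datagram into addressing metadata on one side and content on the other:

```python
import struct

def split_headers_and_payload(datagram: bytes):
    """Split a minimal IPv4/UDP datagram into header metadata and payload.

    The header fields returned here (addresses, ports, length, checksum)
    correspond to the kind of information Innovatio retained; everything
    in the returned payload is the content the Wiretap Act protects.
    """
    ihl = (datagram[0] & 0x0F) * 4                        # IPv4 header length in bytes
    src, dst = struct.unpack_from("!4s4s", datagram, 12)  # source/destination IPs
    sport, dport, udp_len, checksum = struct.unpack_from("!HHHH", datagram, ihl)
    header_info = {
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
        "src_port": sport,
        "dst_port": dport,
        "udp_length": udp_len,
        "checksum": checksum,
    }
    payload = datagram[ihl + 8:]                          # bytes after the 8-byte UDP header
    return header_info, payload
```

A capture that keeps only `header_info` and overwrites the payload, as Innovatio’s modified tool did, reveals who is talking to whom, but not what is being said.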

Pen Register and Trap and Trace Devices Act

The Pen Register and Trap and Trace Devices Act makes it a crime for a person to “install or use a pen register or trap and trace device.” 18 U.S.C. §3121(a). These devices capture, record, or decode dialing, routing, addressing, or signaling information transmitted by an instrument or facility from which a wire or electronic communication is transmitted, information that does not include the contents of any communication. 18 U.S.C. § 3127(3). Unfortunately, there is not yet any case law that analyzes the substantive legitimacy of applying this statute’s provisions to data packets. The district court in In re Innovatio briefly discussed the statute’s relevancy but, due to a lack of precedent and the brevity of the argument raised by the defendant, declined to apply it. However, it is noteworthy that in its brief discussion, the court stated that applying the act in this case would treat every device connected to the network as a trap and trace device, as “all Wi-Fi devices on a network necessarily receive addressing information to determine if a data packet is addressed to them.”

Insight to the Supreme Court.

Unfortunately, the opportunity to put this issue to rest was lost when the Supreme Court denied Google’s petition for a writ of certiorari in Joffe v. Google, Inc. However, in the Supreme Court’s most recent case on this subject, Carpenter v. U.S., 138 S. Ct. 2206 (2018), the Court reaffirmed the precedent set in Smith v. Maryland regarding the acceptable use of pen registers by the U.S. government without a warrant. In its review of the third-party doctrine, which recognizes that “an individual has a reduced expectation of privacy in information knowingly shared with another,” the Court recounted that it “doubt[ed] that people in general entertain any actual expectation of privacy in the numbers they dial.” Recall from a previous blog post (Is Targeted Marketing An Acceptable Infringement To Our Right To Privacy) that the right to privacy requires both a subjective expectation and objective public support for a right to privacy to exist under the Fourth Amendment. This said, it seems the Court would likely hold a similar viewpoint of individual privacy rights over data packet headers on an unencrypted network.


At the time of this writing, it is fair to conclude that people do not have privacy rights over the data packet headers sent on an unencrypted network. As for the content, to the extent Joffe can shine a light on this matter, it seems the contents of your data packets are still protected from a person running Wireshark in the internet café (located in the Ninth Circuit). Note, this analysis does not cover someone looking over your shoulder to see what you’re typing, nor an individual saving information from your device non-contemporaneously with the data transmission, as both fall outside the scope of the Wiretap Act. It is also important to consider that there are state-law provisions that can be more protective than federal law (see, e.g., Massachusetts). But knowing what you know now about the ease of access the general public has to free “sniffer” software like Wireshark, a strong argument can be made that you are no longer entitled to the subjective belief that your communication on an unencrypted network is private. Sorry.

** Please note: this article is not to be interpreted as legal advice. ** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Should The Private Sector Warehouse Data For The Intelligence Community?

Posted: November 28, 2018

By: Sean Lanagan, Attorney at Law

Privatized cloud services for the U.S. Government.

Although the above video is long, it accurately describes the DoD’s position as to its need for a cloud-based system that aggregates agency data for utilization across departments. Dubbed the Joint Enterprise Defense Infrastructure (JEDI) Program, the initiative has the DoD issuing a request for proposal (RFP) to U.S. technology firms to deliver this solution and help bridge the gap in defense technological infrastructure. The budget is up to $10 billion for services spanning the next ten years.

This is not the first time the federal government has sought private technology firms for warehousing classified data. In 2014, Amazon secured a $600 million cloud hosting bid for the CIA. Intelligence community officials at the time stated that Amazon would be installing its system behind the CIA's firewall as a measure to keep pace with advancements in technology: every time Amazon Web Services (AWS) offers an update, Amazon updates the CIA's cloud. Then-Director of National Intelligence James Clapper seemed to think that the need to stay in sync with the technology curve outweighed the risk of a private-sector service offering. Whether Clapper was correct is unknown. In 2015, Amazon reported to the U.S. government that its video streaming hardware had been hacked by the Chinese. In a process known as interdiction, Chinese spies managed to place a chip (small enough to fit on the tip of a pencil) on a server’s motherboard as it was assembled by one of Amazon’s subcontractors. Apparently, the chip had the ability to alter live drone footage fed to the CIA through Amazon’s AWS cloud platform. It is not apparent which, if any, operations were affected.

As for the DoD’s recent JEDI RFP, the Pentagon has faced mounting criticism from the technology community as to the bidding process, the structure of the contract, and the substance of the technology requested. This post attempts to raise some concerns pertinent to the private sector providing cloud-based solutions, as requested by government agencies, and to formulate an objective conclusion as to the reasonableness of this prospect.

Too big to fail?

As with any private-sector service provider the government is operationally dependent upon, there is a risk that the company will become “too big to fail.” Wall Street firms achieved this status in 2008 in response to the crash of the seemingly stable real estate market. The concern: the technology sector is more volatile than the real estate sector (based on the three-year monthly betas of the SPDR ETFs XLRE and XLK). If the U.S. Defense Department becomes dependent on a technology company’s services, it needs to prepare to underwrite the risk of the company failing, restructuring, and/or being acquired. The DoD has indicated that the proposed JEDI contract is initially only for two years, with two four-year extensions. However, critics claim that the recipient of the first term will almost certainly prevail over the next two terms due to onboarding costs.
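The beta comparison above can be made concrete. Beta is the covariance of an asset's returns with the market's, divided by the market's variance; a beta above 1 means the asset swings harder than the market. The return figures below are synthetic monthly numbers for illustration only, not actual XLRE or XLK history:

```python
from statistics import mean

def beta(asset, market):
    """cov(asset, market) / var(market) over paired return series."""
    a_bar, m_bar = mean(asset), mean(market)
    cov = sum((a - a_bar) * (m - m_bar) for a, m in zip(asset, market))
    var = sum((m - m_bar) ** 2 for m in market)
    return cov / var

# Synthetic monthly returns (illustrative only).
market      = [0.010, -0.020, 0.030, 0.015, -0.010, 0.020]
real_estate = [0.008, -0.012, 0.020, 0.010, -0.006, 0.014]  # damped swings
tech        = [0.018, -0.035, 0.050, 0.028, -0.020, 0.034]  # amplified swings

print(f"real estate beta: {beta(real_estate, market):.2f}")  # below 1
print(f"tech beta:        {beta(tech, market):.2f}")         # above 1
```

On these toy numbers, the tech series produces a beta well above 1 while the real estate series stays below it, which is the sense in which the technology sector is the more volatile bet for the government to be dependent on.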


Antitrust law in the United States was established in 1890 by the Sherman Antitrust Act, which imposed civil and criminal penalties on those guilty of “conspiracy, in restraint of trade or commerce among the several States, or with foreign nations.” The Clayton Act expanded antitrust law to prohibit price fixing, monopolization by acquisition or control, and selective bidding.

Amazon, IBM, Oracle, Microsoft, and Google are the preeminent competitors for this contract. However, under the RFP, only one cloud service provider will receive the contract, instead of a multi-vendor solution in which each company can supply the best of its individual components. Oracle, Microsoft, and Google have all lobbied for the multi-vendor approach. The DoD has argued that a one-vendor solution offers more streamlined onboarding, implementation, and upkeep. However, this position seems to have little persuasive effect within the technology community, partly because the only company that presently has the capability to offer the cloud service as requested under the DoD’s RFP is Amazon’s AWS platform. The technology community is not alone in its concern. In an open letter to the DoD’s Inspector General, Republican Representatives Tom Cole and Steve Womack expressed concern that the DoD violated the Federal Acquisition Regulations and DoD ethics policy. In this letter, the Congressmen suggested that unnecessary “gating” restrictions tailored the proposed contract to one specific contractor. The DoD has since confirmed it is “reviewing the request” made by the Congressmen. It is presently unclear where the IG will fall on the RFP's alleged impropriety.

Adding to the concern, Amazon arguably dominates the cloud infrastructure market. Based on a 2017 study by Gartner Inc., Amazon owns 51.8 percent of the cloud infrastructure market, followed by Microsoft Azure with 13.3 percent, Alibaba with 4.6 percent, and Google Cloud Platform at 3.3 percent. Although DoD officials have indicated that the JEDI contract will cover less than one-fifth of the DoD’s overall cloud requirements, this offers little consolation on the antitrust issue. As to whether such market share rises to the level of monopolization by control under the Clayton Act, case law is helpful. Supreme Court precedent suggests “domination of the market [is] sufficient in itself to support the inference that competition had been or probably would be lessened.” Standard Oil v. U.S., 337 U.S. 293, 301 (1949). Although this meets one of the prongs for an antitrust action, the decision to pursue such an action rests with the Attorney General. With the recent departure of Jeff Sessions, it is hard to do more than speculate as to the likelihood of a future enforcement action.

Regardless of the probability of such action being taken against Amazon, the U.S. government needs to seriously consider the ramifications of facilitating such anti-competitive practices. If the U.S. government intends to continue enforcing Clayton Act violations, the Defense Department needs to ensure there is no appearance of a double standard or impropriety in the bidding process.

Lack of continuity in “principles.”

Microsoft: In an open letter dated October 12, Microsoft employees voiced ethical concerns as to the utilization of their work for “‘a more lethal’ military force.” They voiced frustration with Microsoft’s existing cloud contract with Immigration and Customs Enforcement (ICE), claiming Microsoft “provides ‘mission critical’ Azure cloud computing services that have enabled ICE to enact violence and terror on families at the border within the United States.” The letter urges Microsoft’s A.I. ethics committee to play a more active role in reviewing government contracts. At the end of October, Brad Smith, Microsoft President and Chief Legal Officer, publicly replied to these concerns. Smith defended the company’s commitment to the U.S. military, stating, “we believe in the strong defense of the United States and want the people who defend it to have access to the nation’s best technology,” and suggested mobility within the organization should employees feel uncomfortable with any projects Microsoft pursues.

Google: Google has dropped out of the bid amidst employee protest, Google’s infrastructure not meeting the standards requested by the bid, and Google’s claim that it could not be assured the contract’s obligations align with its AI Principles. Under these AI Principles, Google commits not to design or deploy artificial intelligence in areas “likely to cause overall harm;” for “[w]eapons” or technology designed to “facilitate injury to people;” or “whose purpose contravenes widely accepted principles of international law and human rights.” Notably, these AI Principles were also referenced by Google in explaining its decision not to renew its contract with the Pentagon’s artificial intelligence program.

This lack of continuity is clear: private-sector “principles” and the federal government’s purpose of providing for the common defense are at odds. Although Microsoft leadership is committed to helping the military, it is clear many in its workforce are not. Moreover, the classified nature of these projects raises an overarching concern of loyalty. With activists like Edward Snowden still fresh in our memory, one must question whether outsourcing to the private sector is the government’s best option.


I am not opposed to the private sector developing cloud infrastructure for the intelligence community, but the solutions proposed are concerning. I have obvious concerns with the JEDI project from both a private-sector and an administrative-acquisition standpoint. The DoD’s IG and the Attorney General will largely dictate the equitable resolution of those concerns. The bigger issue, however, is whether the private sector can best broker the DoD’s solution, in the capacity sought, under traditional contract services. The private sector is no doubt best equipped to bridge the gap in technological infrastructure. But it seems to me the government is looking for a dynamic solution under the static framework of a long-term, single-vendor contract.

In a perfect world, the DoD would be able to keep pace with the technology curve without being locked into any particular company’s solutions, through a structure that leverages private-sector competition tailored to the specific needs of the government. This can be achieved through cooperatives: an open-source application ecosystem where agencies post object-oriented tasks and the private sector competes to proffer the best solution for an agreed-upon award. These would be compartmentalized micro-contracts. The agency delegates the systems engineering but remains responsible for implementation in conformance with operational security. This proposal mitigates reliance on the solutions of any one company, the potential facilitation of anti-competitive practices, and any variance in principles-based work ethic. A similar technique has been used to develop weapon systems and is not new to the DoD. A logical conclusion is that the DoD’s interest in a single vendor is for speed of development. As a commentator well-read in the unclassified material, I see more downside than upside in pursuing the DoD’s single-vendor solution for its cloud infrastructure needs.

Of the solutions already presented, I agree with the multi-vendor approach, but for it to be efficient and effective, the government needs to lower the transaction costs associated with onboarding new technology. This need is at the heart of my proposal and is the root cause of the government’s gap in technological development. Outsourcing is not the solution; the integration of innovation is.

** Please note: this article is not to be interpreted as legal advice. ** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Is Targeted Marketing An Acceptable Infringement To Our Right To Privacy?

Posted: November 18, 2018

By: Sean Lanagan, Attorney at Law

What is targeted marketing?

Targeted marketing is a method of advertising that attempts to leverage big data (i.e., an amalgamation of search history, consumer preferences, and other demographic data) to appeal to the target’s interest in purchasing goods or services. Facebook is often used as a notable example of how such information can be acquired. However, this should not give users a false sense of security on other sites. Big data can be acquired on any site where your interaction gives insight into who you are; this includes Google, Spotify, YouTube, Twitter… and the list goes on. Despite privacy considerations, big data can have many advantages; we live in a consumer-centric society, and who doesn’t want to find their perfect product? But to define an unacceptable infringement of privacy, one must first understand the origin of privacy rights.

Background on the right to privacy.

The right to privacy is not expressly provided within the U.S. Constitution. Instead, privacy is read through the First, Third, Fourth, Fifth, and Ninth Amendments (Griswold v. Connecticut, 381 U.S. 479 (1965)). The context of these Amendments is important in ascertaining the parameters of this right. The First Amendment establishes privacy rights through our freedom of speech, religion, and assembly. The Third Amendment prohibits the quartering of troops in “any house.” The Fourth Amendment ensures privacy against unreasonable searches and seizures of our “persons, houses, papers, and effects.” The Fifth Amendment allows citizens to create a “zone of privacy” in which the government may not require the accused to testify against himself. Lastly, the Ninth Amendment protects privacy, providing that none of our rights under the Constitution shall be interpreted to deny or diminish other rights retained by the people.

The Fourth Amendment is arguably the most expansive source of privacy under the Constitution. Under the Fourth Amendment standard, for an expectation of privacy to be protected, the person must have exhibited an actual, subjective expectation of privacy, and that expectation must be one society is prepared to recognize as reasonable. Meeting these two prongs is the basis on which a search warrant is required and, as such, is the threshold for our privacy over our persons, houses, papers, and effects. There is voluminous case law threading this needle for each of the above amendments, but for the purposes of this blog, it is enough to recognize that our right to privacy is not set by what you or I feel is private, but rather by how society recognizes privacy.

Regulating targeted marketing.

Although there is no officially designated agency charged with online privacy enforcement, the Federal Trade Commission (FTC) has assumed this capacity through many of its advisory opinions. In its report, "Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers," the FTC's resounding sentiment was transparency: as long as consumer data is collected in a transparent manner, online service providers are entitled to collect and use user information. Generally, consent must be obtained from the user before any personally identifiable information is sent to third parties. However, this consent can often be a condition of service, such that refusal prohibits subscription. The CAN-SPAM Act, VPPA, and COPPA are all statutes that aim to protect consumer privacy; there will be more discussion of these statutes in future blogs.

My conclusions:

As to targeted marketing, this is not new. Before the internet, people suffered unsolicited targeted advertising, from being stopped in the street for a newspaper to being handed a flyer outside a play. Although that advertising was likely just as annoying, it was also less invasive than what we face now, because the person selling newspapers did not follow you around, gather information about your schedule, wait for the perfect time to approach you, and, upon approach, tell you about all of your friends who had bought newspapers from him. Why not? Because this is stalking, and stalking is not only illegal, it is highly unethical.

This said, I agree with the FTC's perspective: stalking is illegal and unethical unless the other party gives consent. Extrapolated to the internet, marketing-based data collection is acceptable as long as those targeted are given notice of the collector's intentions and the opportunity to opt out. In my opinion, too few online platforms follow this standard; those that do not are unethical and should face enforcement actions by the FTC.
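The notice-and-opt-out standard is simple enough to express in code. A minimal sketch, with a hypothetical tracker; the names and fields below are mine for illustration, not any real platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    notified: bool = False   # was the user told data is collected?
    opted_out: bool = False  # did the user decline?
    profile: list = field(default_factory=list)

def record_interest(user: User, interest: str) -> bool:
    """Collect data only when the user had notice and did not opt out."""
    if not user.notified or user.opted_out:
        return False         # collection here would fail the FTC standard
    user.profile.append(interest)
    return True

alice = User("alice", notified=True)
bob = User("bob", notified=True, opted_out=True)

record_interest(alice, "newspapers")   # collected
record_interest(bob, "newspapers")     # refused
print(alice.profile, bob.profile)
```

The point of the sketch is the gate itself: collection is conditioned on notice and the absence of an opt-out, which is the standard too few platforms actually implement.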

As to our right to privacy, I believe larger societal issues are at play. As society becomes more open with what is shared on the Internet, big data builds personal profiles and metrics on societal trends in information. With time, this data becomes more empirically indicative of what society views as private information. My concern is that, if used as evidence in court, these metrics would have a smoothing effect on individual privacy rights: the extremely private and the extremely open would factor out, and the average of the two would be found to be society’s “reasonable” expectation of privacy. As someone who strongly values the freedom privacy entails, I find the potential use of big data to discern the objective view of privacy concerning, and I encourage others to be mindful of what they share online. In determining your own view of privacy, it may help to know that some of the most controversial cases turn on a strict view of privacy. See abortion (Roe v. Wade, 410 U.S. 113); see also homosexual sodomy (Lawrence v. Texas, 539 U.S. 558).

The degree to which targeted marketing is acceptable is best answered with another question: are telemarketers and spam invasive to privacy because they are not solicited, or because they solicit the wrong content at the wrong time or place? If the answer is the former, and marketing must be solicited, the result would undermine competitive and capitalistic forces. If the answer is the latter, targeted marketing is not a question of acceptability but of efficiency, and it will be as inefficient as the user remains private. Ultimately, whether targeted marketing is an acceptable infringement to our right to privacy depends (not on what I think, but) on whether our society is more incentivized by the ease of consumerism or by the retention of privacy rights. For all of our sake, let’s hope it’s the latter.

** Please note: this article is not to be interpreted as legal advice. ** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.

Is Mark Cuban Correct? Are Software Patents Worthless?

Posted: November 16, 2018

By: Sean Lanagan, Attorney at Law

Billionaire venture capitalist Mark Cuban is an outspoken critic of patents. Mr. Cuban argues that patents are intrinsically worthless, and that it is the entrepreneur who adds value by bringing the invention to market. As a notable tech entrepreneur, Mr. Cuban is not alone in this sentiment. Ronald Mann, then Co-Director of the Center for Law, Business & Economics at the University of Texas School of Law, published a law review article scrutinizing the commercial benefit of software patents for small, venture-backed firms after interviewing roughly sixty managers, investors, and attorneys on the issue.

What others think:

In a 2008 study, Professor Sichelman of the Berkeley Center for Law and Technology refuted Mann’s conclusion with empirical data from software companies. Sichelman conceded the notion that patents are intrinsically worthless but asserted that they hold extrinsic value. In his study, the Professor concluded that “entrepreneurial firms of all ages, sizes, and technologies appear to engage in the so-called ‘strategic’ use of patents,” relying on them heavily to raise financing, help in acquisitions or initial public offerings, and augment their image.

It is worth noting that in 2010 this survey was re-examined in a three-part series. The minds at Berkeley found that a majority of the early-stage software companies that did not file were deterred by the “high cost of patenting and enforcing their patents,” and that executives were less than “slight[ly]” incentivized by patents to innovate. Although the authors were left curious as to the divergence in perspectives between the entrepreneurs in the 2008 survey and their executives, I find this tension fitting.

My Conclusions:

It is important to distinguish between executives, entrepreneurs, and inventors. From an incentives perspective, it is understandable for executives to view patents as a depreciable asset, or cost of goods sold, and less a catalyst to innovation. Whereas an entrepreneur is more incentivized by attaining bargaining power. I believe Mark Cuban is correct in asserting that the majority of economic value is not in the invention itself but in the marketing thereof. Any sketch, plan, or device is not inherently useful until it is implemented in the market and made widely available at an affordable cost. However, it is important to note, Mark is speaking from the perspective of an entrepreneur. While it is possible that the innovator is the entrepreneur, this is not always the case. Some are skilled at creation, and others at the monetization. If Mark Cuban's perspective was to be embraced, and the market turned away from protecting intellectual property rights, the rights of inventors that are not entrepreneurs would be infringed upon.

While I understand Mr. Cuban’s frustration that the acquisition of intellectual property as a sword, instead of a shield, stifles the modestly funded entrepreneur with a good business plan, this frustration is systematic, not systemic.

The renowned economist and Nobel laureate Ronald Coase wrote his acclaimed paper on a similar concept. Coase’s thesis: clearly identified property rights and the reduction of transaction costs mitigate negative externalities in the market. Extrapolating from Coase, the systematic objection to intellectual property rights is a function of the high transaction costs associated therewith, not, as Mark Cuban claims, the property rights themselves.

If a conclusion can be reached, it is that the patent process needs to be reformed to lower transaction costs. Patents are the brokering of monopolistic forces. As such, small-business patents are necessary to prevent large corporations from squatting on intellectual property, as seen with IBM in the early Microsoft days. To this end, the federal and state governments should do everything in their power.

What has been done?

Since the time these Berkeley surveys were published, the FTC has proffered reports that were instrumental in Congress passing the America Invents Act in 2011. The FTC has also taken a more active role in policing anti-competitive patent trolling, publishing its landmark report in 2016 and antitrust guidelines in 2017. But with recent law firm reports stating that there is over a 24-month wait from patent application to approval, transaction costs in the U.S. are still extremely high for a modestly funded software start-up. Much work remains to be done.

** Please note: this article is not to be interpreted as legal advice. ** The author of this blog is Sean Lanagan, an attorney focused on Cyber Law. For any questions, email info@cyberlawblog.com.