My professional interest is in taking part in the formation and development stages of organizations, teams, and projects working on challenging problems. I have particular interest and experience in High Performance Computing, Artificial Intelligence, Virtual Reality, and technology innovation in Humanitarian and Civil Society volunteering.
Conservation Labs
CTO & Co-Founder
Conservation Labs was founded to find pragmatic ways to help you save money while better managing our natural resources. We are a team of technologists with a history of solving complex, multibillion-dollar challenges. Our first product, H2know, is a low-cost, easy-to-install smart water meter that provides details about water use and custom conservation recommendations.
Google
New York City, NY; 7/2006-6/2018
Google is a leading information technology company.
Public Speaking and Fora
- Great Lakes Observing System Enterprise Architecture, Expert Advisory Panel, 2016
- Mapping Cholera, Pulitzer Center at NY Academy of Medicine, 2014
- New GTLDs, Tech Open Air Berlin, 2013
- Adapting New Technologies for Humanitarian Aid, SXSW, 2012
- Roundtable on Technology, Science and Peacebuilding, US Institute of Peace, 2011
- Technology, Tools and CONOPS for Extreme Scale Disasters, MIT/DARPA, 2010
- Information Challenges for Crisis Response, NORTHCOM, Colorado Springs, 2010
- Haiti earthquake briefing call, US Executive Office of the President (White House), 1/15/2010
Search: Natural Language Query Analysis - Software Engineer, 4/2015-6/2018
The Google Assistant is a virtual assistant powered by artificial intelligence, available primarily on mobile and smart-home devices. My team built the hub of its natural language processing.
Cloud: Domain Registry, aka Nomulus - Software Engineer, 10/2012-3/2015
My team built Google's new gTLD Registry for 100 new Top-Level Domains and open-sourced the registry software for other TLD operators to use. Visit us at https://registry.google/
I helped build the first version of the system, helped grow the team, and then open-sourced our work. I spoke about this work on the gTLD panel at Tech Open Air, Berlin 2013.
Cloud: AppEngine Search - Software Engineer, 3/2011-10/2012
- Implemented the AppEngine prospective search API for Java.
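Prospective search inverts the usual retrieval model: standing queries are registered up front, and each incoming document is matched against them as it arrives. The sketch below illustrates that model only; the class and method names are hypothetical, and this is not the actual AppEngine API.

```java
import java.util.*;

// Illustrative sketch of the prospective ("reverse") search model:
// standing queries are registered first, then each incoming document
// is matched against them. Names are hypothetical, not the AppEngine API.
public class ProspectiveMatcher {
    // subscription id -> terms the document must contain
    private final Map<String, Set<String>> subscriptions = new HashMap<>();

    public void subscribe(String subId, Set<String> requiredTerms) {
        subscriptions.put(subId, requiredTerms);
    }

    // Returns the ids of all subscriptions whose terms the document satisfies.
    public List<String> match(String document) {
        Set<String> tokens =
            new HashSet<>(Arrays.asList(document.toLowerCase().split("\\W+")));
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, Set<String>> sub : subscriptions.entrySet()) {
            if (tokens.containsAll(sub.getValue())) {
                matches.add(sub.getKey());
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        ProspectiveMatcher m = new ProspectiveMatcher();
        m.subscribe("quake-alerts", Set.of("earthquake", "haiti"));
        System.out.println(m.match("Earthquake relief efforts continue in Haiti"));
        // -> [quake-alerts]
    }
}
```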
Google.org: Crisis Response - Representative, Software Engineer, 2/1/2010-3/2011
- Consolidated work on Haiti into a new "Crisis Response" group in Google.org.
- Public speaker at the United States Institute of Peace, Technology and Peace Planning Conference.
- Represented Google at multiple humanitarian, United Nations and US Government meetings to review and plan more effective disaster response.
- Mission to Haiti with international responders during the December 2010 Haiti cholera response; later spoke about this work on a technology panel at SXSW 2012.
- Mission to Haiti as part of a five-person team to investigate information challenges in disaster response in the wake of the Haitian Earthquake.
Google Person Finder - Project Manager, Software Engineer, Site Reliability Lead, 1/13/2010-1/31/2010
In response to the Haitian Earthquake, my team of over 50 volunteers launched a missing-person tracking site within 36 hours, with live federation to CNN, the New York Times and multiple government sites, and reached a solid 1.0 within two weeks.
Search: Realtime Search - Software Engineer, 1/2009-1/2010
Realtime Search is a feed indexing and serving system for realtime results in web search. It was named one of the 10 key advances of Google search by Wired magazine in 2010.
- Received Google's OC Award, 10 December 2010.
- Launched at the Computer History Museum, December 2009.
- Production system lead.
Search: Instant Indexing - Site Reliability Engineering Lead, 9/2007-6/2008
Instant Indexing is the system which produces search results for documents recently published to the web. I was responsible for ensuring its availability, scalability and performance.
Search: Blog Search - Site Reliability Engineering Lead, 7/2007-12/2008
Blog Search is the system which produces search results for blogs published to the web, with a dedicated subdomain at blogsearch.google.com. I was responsible for ensuring its availability, scalability and performance.
Search: Web Search - Crawl, Index & Logging - Site Reliability Engineer, 7/2006-6/2008
My first job at Google: run the web crawl and index system that powers Google search! Amaze.
Hosted 5 interns, conducted 200 interviews, participated in 2 semesters of college recruiting at CMU, and taught 2 semesters of video game programming at the School for Global Studies in cooperation with Citizen Schools.
Panther
New York City, NY; 7/2005-7/2006
Panther was started in the summer of 2005 to serve the growing content distribution market, and particularly to target the space left when the leading provider, Akamai, acquired the number-two provider, Speedera. Panther was later acquired by CDNetworks.
- Business Development
- Built an organization that could quickly develop and deploy a world-class CDN solution. Panther succeeded in this goal, going from first planning meeting to beta trials of a 15-city, 5-network CDN within 6 months and with minimal capital expenditure. Panther closed its first round of funding in July 2006.
- Production System
- Designed and developed a high-capacity caching HTTP/1.1 server (though we ultimately opted for Squid), and assisted in network capacity planning, traffic routing and load-balancing design for Panther's worldwide, 20-city, 10-network content distribution service.
DoubleClick
New York City, NY; 5/2004-3/2005
DoubleClick was a leading Internet marketing company, serving trillions of ads per year around the world in a variety of media. DoubleClick was later acquired by Google.
- Senior Software Engineer; 1/2005-3/2005
- Team member of a small R&D group developing a next-generation ad and media serving architecture for DoubleClick's extremely high traffic rates and rich service offerings.
- Software Engineer; 5/2004-1/2005
- Team member of a small R&D group, reporting to the CTO/Co-Founder, that developed Sonar, an automated shopping search engine and portal. Sonar was later spun off as ShopWiki.
Designed and implemented web-page information extraction techniques, including methods based on Bayesian text classification libraries and custom logical inference learners.
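As a rough illustration of the Bayesian side of that work, here is a minimal multinomial naive Bayes text classifier; this is a sketch under assumed details, not the original extraction code.

```java
import java.util.*;

// Minimal multinomial naive Bayes text classifier, illustrating the kind
// of Bayesian classification used for page-text extraction. Sketch only.
public class NaiveBayes {
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
    private final Map<String, Integer> docCounts = new HashMap<>();
    private final Set<String> vocabulary = new HashSet<>();
    private int totalDocs = 0;

    public void train(String label, String text) {
        totalDocs++;
        docCounts.merge(label, 1, Integer::sum);
        Map<String, Integer> counts = wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String token : text.toLowerCase().split("\\W+")) {
            counts.merge(token, 1, Integer::sum);
            vocabulary.add(token);
        }
    }

    public String classify(String text) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String label : docCounts.keySet()) {
            // log P(label) + sum of log P(token | label), Laplace-smoothed
            double score = Math.log(docCounts.get(label) / (double) totalDocs);
            Map<String, Integer> counts = wordCounts.get(label);
            int labelTotal = counts.values().stream().mapToInt(Integer::intValue).sum();
            for (String token : text.toLowerCase().split("\\W+")) {
                int c = counts.getOrDefault(token, 0);
                score += Math.log((c + 1.0) / (labelTotal + vocabulary.size()));
            }
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }

    public static void main(String[] args) {
        NaiveBayes nb = new NaiveBayes();
        nb.train("product", "buy this camera great price shipping");
        nb.train("article", "the history of photography and early cameras");
        System.out.println(nb.classify("great camera price"));  // -> product
    }
}
```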
Designed and implemented a partially HTTP/1.0-compliant web media server capable of servicing 10-20k simultaneous connections on a commodity processor using Java NIO, thus greatly reducing the number of machines needed to support DoubleClick's ~8 billion daily ad serves. The server was zero-copy and dual-threaded, the second thread being used for arbitrary statistics collection. The server went live in 2006 on DoubleClick's main ad-serving system but was later retired.
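The core of such a server is a single selector loop multiplexing thousands of non-blocking channels. The skeleton below shows only that pattern; the real server's HTTP parsing, second statistics thread, and zero-copy FileChannel.transferTo() response path are omitted, and the names are hypothetical.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// Skeleton of a single-threaded, selector-driven NIO server of the kind
// described above, reduced to an echo loop for brevity.
public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocateDirect(8192);
        while (true) {
            selector.select();                       // block until a channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {            // new connection: register for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {       // request bytes available
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {   // peer closed the connection
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);        // echo back (stand-in for a response)
                    }
                }
            }
            selector.selectedKeys().clear();
        }
    }
}
```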
Assisted in development of a high-capacity web crawler.
Consolidated a collection of idle pre-deployment Linux servers into a 64-CPU task-oriented processing cluster. This helped Dr. Sean Irvine factor one of the largest Cunningham numbers of 2006.
Reel Two
Jersey City, NJ - San Francisco, CA - Hamilton, New Zealand; 5/2001-12/2003
Reel Two produced high-performance information analytics solutions for industries challenged by increasing volumes of complex data, such as life sciences, legal and media. Reel Two was later acquired by NetValue.
- Business Development
- Assisted in developing a strategy for our company in the Knowledge Management industry after exploring emerging information markets and their associated product and service opportunities.
- Product Development
- Director of Applications and Services
- Led design and use-case development of prototypes and successive major version releases of applications for dataset analytics and classification, document entity extraction and web-based document management portals. Presented solutions to clients and advised licensees on architecture planning.
Designed and assisted in construction of a document search solution for a major client that exceeded the price/performance of solutions available on the market. The system managed tens of millions of documents using Lucene, with parallel index building across a large Linux cluster and index merging on a large-memory (16GB) machine.
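The build-in-parallel, merge-centrally shape of that pipeline can be sketched with Lucene's bulk addIndexes call. This assumes a recent Lucene release (the original system predated it), and the paths are hypothetical.

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Merge per-node Lucene indexes (built in parallel across a cluster)
// into one searchable index on a large-memory machine. Sketch only.
public class IndexMerger {
    public static void main(String[] args) throws Exception {
        try (Directory merged = FSDirectory.open(Paths.get("/data/index-merged"));
             IndexWriter writer =
                 new IndexWriter(merged, new IndexWriterConfig(new StandardAnalyzer()))) {
            Directory[] shards = new Directory[args.length];
            for (int i = 0; i < args.length; i++) {
                shards[i] = FSDirectory.open(Paths.get(args[i]));  // e.g. /data/index-node01
            }
            writer.addIndexes(shards);   // bulk-merge the shard indexes
            writer.forceMerge(1);        // optional: collapse to a single segment
            for (Directory shard : shards) shard.close();
        }
    }
}
```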
Designed and constructed a document portal product with search, classification and multi-user curation functionality. The portal accommodated a dozen researchers concurrently searching and modifying tens of thousands of document categorizations across a large client-defined taxonomy, with updated curations used to reorder documents within the full taxonomy in real time.
Conducted natural language research in automatic term disambiguation using context windowing around target terms to model various meanings.
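The windowing step itself is simple: extract the +/- k-token context around each occurrence of the target term, then cluster the resulting windows to recover its distinct senses. A minimal sketch with hypothetical names, illustration only:

```java
import java.util.*;

// Extract +/- k-token context windows around each occurrence of a target
// term; the windows serve as feature vectors whose clusters correspond
// to different senses of the term. Illustrative sketch.
public class ContextWindows {
    public static List<List<String>> windows(String text, String target, int k) {
        String[] tokens = text.toLowerCase().split("\\W+");
        List<List<String>> result = new ArrayList<>();
        for (int i = 0; i < tokens.length; i++) {
            if (tokens[i].equals(target)) {
                int from = Math.max(0, i - k);
                int to = Math.min(tokens.length, i + k + 1);
                List<String> window =
                    new ArrayList<>(Arrays.asList(tokens).subList(from, to));
                window.remove(target);  // drop the target, keep its context
                result.add(window);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(windows(
            "The bank raised rates while the river bank flooded", "bank", 2));
        // -> [[the, raised, rates], [the, river, flooded]]
    }
}
```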
Webmind
New York City, NY; 6/1998-4/2001
Webmind (originally Intelligenesis) was a global Artificial Intelligence R&D company started in 1998. I left school early to be one of its first employees. Webmind was a fascinating place.
- Webmind Classification System
- Project Lead
- Led the New York Webmind Classification System (WCS) team, which produced the company's first and primary product. Performed market analysis and product specification for the 1.0-2.0 releases of WCS, and pre-sales engineering for WCS.
- Assisted in development of the core research platform, including work on distributed network data structures and Java VM analysis.
- Managed a parallel processing project team that developed an architecture capable of distributing workloads for applications such as fractal computation and prime factorization, yielding linear speedups on test applications.
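As a single-machine analogue of that distribution pattern (hypothetical names; the original spread work across networked machines), independent work items are farmed out to workers and the results gathered:

```java
import java.util.*;
import java.util.concurrent.*;

// Single-machine analogue of the distributed work pattern: farm out
// independent tasks (here, trial-division primality tests) to a worker
// pool and gather the results. Embarrassingly parallel workloads like
// this scale nearly linearly with the number of workers.
public class WorkDistributor {
    static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        Map<Long, Future<Boolean>> results = new LinkedHashMap<>();
        for (long n = 1_000_001L; n < 1_000_101L; n += 2) {
            final long candidate = n;
            Callable<Boolean> task = () -> isPrime(candidate);  // one unit of work
            results.put(candidate, pool.submit(task));
        }
        for (Map.Entry<Long, Future<Boolean>> e : results.entrySet()) {
            if (e.getValue().get()) System.out.println(e.getKey() + " is prime");
        }
        pool.shutdown();
    }
}
```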
Sprintlink
Reston, VA; 6/1997-6/1998
Sprintlink develops and maintains Sprint's national internet backbone, one of the largest in the world.
- Backbone Operations
- Software Engineer
- Researched and prototyped a system to model the traffic flows of Sprint's national internet (IP) backbone. Designed and implemented a database of Sprint's USENET clients for real-time network administration. Helped design and implement a web front-end to this database, for both internal and client use. Acted as substitute administrator of Sprint's national USENET services for a two-week period.
J.T.Smith and Associates
Urbana, IL; 6/1996-1/1997
J. T. Smith is a computing consultancy. Assisted in corporate sales, account management, system administration, and web site design and development.
Advancenet
Champaign, IL; 1996
Advancenet is an Internet service provider in operation since the early days of the Internet. Assisted in sales of leased lines (56k-T1/DS1) and Internet services, web site design and development, UNIX administration, and technical support.
My First Jobs
Urbana, IL; 1980s and 1990s
My first job was delivering newspapers for The News Gazette, which I started around age 12 and continued through early high school. During this time I also had various small ventures involving lawn mowing, snow shoveling, corn detasseling, and the like. I stopped the paper route because of a move. In my new neighborhood, I served as a waiter at the Clark-Lindsey Retirement Village for a year or two, and finished my time in high school as a dishwasher at The Art Mart in downtown Urbana. After high school, I was fortunate enough to attend university.
My academic pursuits have focused on the fundamental concepts of Computer Science and the advanced Machine Learning topics associated with Artificial Intelligence and Natural Language Processing. I am currently working, unaffiliated, on an academic thesis concerning the origins and nature of intelligence.
I have no degree: I left school with one semester remaining so that I could join Webmind as one of its first employees. While at Webmind I took a CS independent study at Columbia to complete my last required CS course. The remaining credits for my degree are general electives and a science course, though I have postponed them indefinitely in favor of my professional and independent academic pursuits.
Carnegie Mellon University
Pittsburgh, PA; 1994-99
School of Computer Science. Core CS curriculum completed.
Columbia University
New York City, NY; 2000
Independent Study in Programming Languages.