Reflection on Spring Semester 2017 for BFSLA
Johanan Ottensooser, the 2016 BFSLA Scholarship recipient, reflects on the spring semester 2017 of his LLM in Law, Technology and Entrepreneurship at Cornell Tech in New York City.
These past six months were exceptionally busy, but above all they were exciting. In this reflection, I’ll discuss what I studied, in law and computer science, and what I built.
This semester, I took the following legal classes:
- Technology Transactions (II);
- High Growth Corporate Transactions (II);
- Employment Law;
- Internet, Privacy and Cybersecurity; and
- Law Team.
As discussed in my previous update, the faculty that Cornell Tech has available to teach the substantive law courses in the curriculum is formidable. Having covered the basics last semester, this semester focussed on more intricate issues: navigating distressed financing, bankruptcy, IP and employment challenges, and so on. Internet, Privacy and Cybersecurity covered everything from freedom of speech in an internet context to wiretapping, hacking and computer misuse, data and data privacy, and more. Employment Law focussed on building a diverse and compliant high-growth organisation, with particular attention to ensuring the startup’s intellectual property is appropriately protected.
In Law Team this semester, my partner Max and I worked with a top-tier US technology practice to advise four student-developed startups that spun out of Cornell Tech: among them, a video-based mobile gaming company, an insurance company for gig-economy workers and a design research tool. We worked with them on intellectual property protection, navigating regulation, creating inter-founder agreements and finding financing to grow. We worked with these startups in a somewhat novel way: rather than applying our legal analysis from the outside looking in, we helped them design their products in a compliant manner.
This semester I was also able to take a few computer science subjects: algorithm design, and blockchain and cryptocurrencies.
In algorithm design, we studied linear optimisation models for solving problems prescriptively, and basic Monte Carlo simulations for estimating solutions descriptively. We learned the strengths and weaknesses of prescriptive optimisation and descriptive estimation, what information is necessary to run such models, and how best to collect and organise data as an input for them. My final project for this class was a variant of the travelling salesperson problem in which the objective was to maximise profit, with increasing marginal costs for each stop. These skills are broadly applicable in strategic decision-making and have been an essential part of our financial system, project planning and logistics for years; as a lawyer, it was fascinating to learn a little more about such a foundational technique.
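To give a flavour of the variant described above, here is a toy sketch (not my actual coursework model; the cities, profits and cost schedule are made up): each stop yields a profit, and each additional leg of the route costs progressively more, so the best route may skip stops entirely.

```python
from itertools import permutations

# Hypothetical profit per stop and base travel costs (illustrative only).
cities = {"A": 30, "B": 45, "C": 25, "D": 40}
dist = {("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
        ("B", "C"): 12, ("B", "D"): 8, ("C", "D"): 9}

def travel(a, b):
    return dist.get((a, b)) or dist.get((b, a))

def profit(route):
    """Total profit of visiting the stops in `route`, in order."""
    total = sum(cities[c] for c in route)
    for i, (a, b) in enumerate(zip(route, route[1:])):
        # Increasing marginal cost: the i-th leg costs (1 + 0.5*i)
        # times its base distance.
        total -= travel(a, b) * (1 + 0.5 * i)
    return total

# Brute force over every ordered subset of stops — fine for a handful
# of cities, though the real problem requires cleverer optimisation.
best = max(
    (p for r in range(1, len(cities) + 1)
       for p in permutations(cities, r)),
    key=profit,
)
```

Because marginal costs rise with each stop, the optimal answer is often a short, profitable route rather than a full tour, which is what distinguishes this variant from the classic problem.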
In blockchain and cryptocurrencies, I had the pleasure of learning from Professor Ari Juels, co-director of IC3 and a professor who has helped shape the cryptocurrency and smart contract ecosystem. In this class, we learnt everything from the basics of blockchain architectures to the technical operations and functions that can be executed on a blockchain. In my final paper, I wrote about the problems with fully collateralised financial instruments (which most smart contracts are): estimating the increased cost of otherwise identical products due to the cost of capital (in some examples, a 2–3 order of magnitude increase in price), and recommending borrowing legal architecture from disparate fields such as derivatives and property finance to help solve these problems. I hope to publish a more developed iteration of this paper in the near future.
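The cost-of-capital intuition can be illustrated with some back-of-envelope arithmetic (all figures here are assumed for illustration, not drawn from the paper): locking up the full notional as collateral ties up far more capital than posting a small initial margin, as a conventional derivative would.

```python
# Illustrative comparison: fully collateralised vs margined instrument.
# All numbers are assumptions, not real market figures.
notional = 1_000_000
cost_of_capital = 0.08   # 8% p.a., assumed
tenor_years = 1
margin_rate = 0.02       # 2% initial margin, assumed

# Fully collateralised: 100% of notional is locked for the tenor.
full_collateral_cost = notional * cost_of_capital * tenor_years

# Margined: only the margin is locked for the tenor.
margined_cost = notional * margin_rate * cost_of_capital * tenor_years

ratio = full_collateral_cost / margined_cost  # 50x under these assumptions
```

Even this crude model shows a 50x funding-cost gap; with thinner margin requirements the gap widens toward the 2–3 orders of magnitude mentioned above.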
This semester, I was involved with two large product development projects.
The first, ZCT, is an interesting project from both a legal and a technical perspective. It was inspired by a talk by the financial technology advisors to the Prime Minister of Japan that I attended at the Japan Society of New York last year, where I learned that by 2020, more money will be transacted on our behalf, by IoT devices, AI and the like, than we will spend directly. Naturally, I immediately thought of the potential conflicts of interest. If a retailer provides you with an AI to stock your pantry, how can we be sure that the AI is acting in your best interest, and not simply trying to maximise the profit of the retailer?
Accordingly, I worked with two Operations Research students, Pablo and Gregory, as well as a computer engineer, to build a model that assuages these fears: a trustworthy autonomous transaction model. This model sacrifices solving the problem perfectly in order to create that trust. Instead of a single agent optimising supply and demand in a marketplace, we built an individual-agency-based model. Each user has their own AI (which can be built by anyone, as this is an open platform) which reduces their demand or supply to a bid or an offer. Our platform resolves these bids (using a second-price auction to increase efficiency: each person can place their best bid knowing that they will still increase their utility by participating) and translates them into instructions.
This also proves to be a much more efficient algorithm than “solving” models, since the hard work is done by each agent rather than by the platform. Existing solving models restrict themselves geographically because the computational difficulty of scaling them grows exponentially; our system’s complexity increases linearly, so we are able to optimise much larger systems.
Our demo use case assumed the existence of autonomous cars, and used our system to optimise our way out of traffic. We created an AI for this system that took into account the user’s time delta (the time the user has to reach their destination, less the time the trip requires), the user’s remaining budget as a percentage of their total budget, their aggressiveness (did they want to spend money on this system, or did they want transactions to net out over time) and their mood (an input from Symbiote, another biotic-sensor startup). We multiplied each input by its weight, and by the total remaining budget, to create a bid. These bids were resolved by our second-price auction model, with the result that autonomous cars on our network were able to bribe each other for priority in traffic.
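The bidding model described above can be sketched as follows. The weights, input values and car names here are all invented for illustration; the actual system’s inputs and tuning differed. The key mechanism is the second-price rule: the highest bidder wins priority but pays only the runner-up’s bid, which is what lets each agent bid its true valuation.

```python
def make_bid(time_delta, budget_pct, aggressiveness, mood, remaining_budget,
             weights=(0.4, 0.2, 0.3, 0.1)):
    """Weighted sum of the agent's inputs, scaled by its remaining budget.

    The weights are illustrative placeholders, not the real system's values.
    """
    inputs = (time_delta, budget_pct, aggressiveness, mood)
    score = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, score * remaining_budget)

def second_price(bids):
    """Highest bidder wins, but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Two hypothetical cars bidding for priority at an intersection.
bids = {
    "car_1": make_bid(0.9, 0.8, 0.7, 0.5, remaining_budget=10.0),
    "car_2": make_bid(0.3, 0.9, 0.2, 0.4, remaining_budget=10.0),
}
winner, price = second_price(bids)
```

Because the platform only ranks bids, its work grows linearly with the number of agents; all the modelling effort lives inside each agent’s `make_bid`.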
This was more of an experiment than a startup, as there is a 5–10 year lead time before a critical mass of autonomous cars is on the road (although only 10% of cars would need to be on our system for everyone to arrive at their destination more swiftly).
The second major product development project I worked on was the opposite: a real product based on existing (albeit cutting edge) technology to solve an enormous pain-point in organisations.
I started working with Datalogue as they were negotiating their seed financing round, when there were two full-time employees. Now there are twelve people working across two offices. This growth is to be expected: the company is addressing a problem that plagues every data-driven company. And, in today’s world, every company is a data-driven company.
Datalogue turns data into something that can be delivered on demand. Previously, once a data deal was done (itself a lengthy process), it took months to homogenise the incoming data with the data already in the system. Datalogue uses deep learning (specifically, a melange of computer-vision and natural-language-processing deep neural networks) to create a “smart data pipeline.” This pipeline understands the underlying ontologies in the data and allows for analysis across disparately structured data (and anyone who has worked with data knows that, even within a single organisation, data is always disparately structured).
Working with Datalogue, I realised that this same technology, understanding the underlying concepts within data, could be invaluable to legal practice. At King & Wood Mallesons, I worked with the data, regulatory and privacy team in my fintech practice. At Cornell Tech, I learned enough about privacy law, especially European privacy law, to understand that it is a large regulatory cost-centre for international companies. All of the companies that I spoke to as part of my research wanted to comply, but some of them could see no way to build compliance in apart from manual auditing, which would take thousands of lawyer and engineer hours to accomplish.
The problem was that the regulations were based on legal ideas: “Personally Identifiable Information”, “Sensitive Information”, and so on. These ideas evolved with the regulations. Accordingly, companies didn’t tag fields in databases as “PII”; databases just contained it. There are even occasions where a single column, which is meant to contain only one type of information, contains PII in some cells (I recently saw this when analysing NYC open data relating to business infringements of consumer law: the field was “business” and it contained both personal names, which would be PII, and company names, which wouldn’t be). There was no reason for a company to flag certain types of data within its systems before these regulations appeared. But now, to be compliant, these ideas needed to be mapped, along with others, like information pertaining to a particular person.
But this is exactly what Datalogue’s deep learning networks do: identify relevant ontologies, like PII (which itself would contain ontologies like Name (which would contain first name and surname), Address (which would contain street address …) …) and information relating to Johanan Ottensooser. We found enough training data and built out our models, as well as an audit tool to give internal legal teams comfort that the models are doing their job.
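To make the “business” column example above concrete, here is a deliberately crude toy classifier (keyword heuristics standing in for Datalogue’s actual trained neural networks, which I have no visibility into): it splits a mixed column into personal names, which would be PII, and company names, which would not.

```python
import re

# Toy heuristic: cells containing a corporate marker are treated as
# company names; everything else is assumed to be a personal name (PII).
# A real system would use trained models, not a keyword list.
COMPANY_MARKERS = re.compile(r"\b(inc|llc|ltd|corp|co|pty)\b\.?", re.IGNORECASE)

def classify(cell):
    return "company" if COMPANY_MARKERS.search(cell) else "person (PII)"

# A hypothetical mixed "business" column, like the NYC open data example.
column = ["Jane Smith", "Acme Corp.", "Ottensooser Pty Ltd", "John Doe"]
labels = [classify(c) for c in column]
```

The point is not the heuristic itself but the shape of the task: compliance requires cell-level classification against legal ontologies that the database schema never recorded, which is why field-level tagging alone cannot solve it.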
This transforms a multi-million-dollar compliance regime into one that is quantifiably more accurate, and an order of magnitude cheaper.
Working with Datalogue shows the potential for a marriage between technology and industry: addressing regulation with technology allows businesses to comply not just with the words of the regulation, but with the purpose behind it.
I was asked by the school to prepare the student address to faculty, students and the New York City technology community at graduation. (You can find the full talk here.) In that talk I reflected on my key takeaways from the degree. While technical learning was certainly critical, and studio-based product development learning was amazing, the most important thing I learned was how difficult it is to assemble an interdisciplinary team, and what amazing results can arise from these difficult teams.
This can be applied more broadly: we shouldn’t consider our profession as one that merely services other people’s businesses. Rather, we should become part of the product development team. We shouldn’t advise, “you have built your product in a manner that exposes you to this or that risk”. Rather, we should actively help design products with compliance in mind.
I would like to take this opportunity to thank the BFSLA. Without the scholarship, I would not have been able to take on this degree, and I would not have learned the law, product development and technical skills that are now central to my practice. Because of this scholarship, I have been able to research the two technologies most likely to affect the law in the medium term: blockchain and artificial intelligence.