Post date: Jul 16, 2021 6:22:12 PM
Dear Leszek,
Following the humane algorithms discussion last Friday, there are two broad project ideas that might pique your interest.
The first is what we call the smarter mobility project, which envisions a distributed computational solution to the “elevator-stairs” problem for individual travellers. It rests on a merger of psychometric research in risk communication and risk-appetite psychology with a stochastic, risk-aware generalisation of Dijkstra’s algorithm. The idea is to fix the troublesome Google Maps, wrest it from commercial interests, and interactively customise it for an individual’s needs and dispositions. It features local stochastic optimisation and a feedback loop to generate more data. The public-facing website at https://sites.google.com/site/smartermobilitynetwork/workshops is perhaps the best introduction to the project, which at this early stage is essentially vapourware. Attached are a “pitch” to would-be graduate students with some rather inartful narration, and a longer introduction I used for a talk. Neither is rich in detail about the technical challenges the project expects to encounter. The paper “Optimising cargo loading and ship scheduling in tidal areas” (Le Carrer et al. 2020. European Journal of Operational Research 280: 1082-1094) peeks at the maths issues.
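To make the routing idea a little more concrete, here is a minimal sketch of what a risk-aware Dijkstra variant might look like. Everything in it is my own illustration, not project code: the (mean, variance) edge labels, the mean-plus-λ·standard-deviation scoring, and the toy graph are all assumptions, and summing risk-adjusted edge costs is only a crude stand-in for properly propagating travel-time distributions along a path.

```python
import heapq
import math

def risk_aware_dijkstra(graph, source, risk_aversion=1.0):
    """Dijkstra over edges with uncertain travel times.

    graph: {node: [(neighbour, mean_time, variance), ...]}
    Each edge is scored as mean + risk_aversion * sqrt(variance),
    so a cautious traveller (high risk_aversion) avoids unreliable links.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, mean, var in graph.get(u, []):
            nd = d + mean + risk_aversion * math.sqrt(var)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# The "elevator-stairs" choice: stairs are slower on average but reliable;
# the lift is faster on average but occasionally very slow.
graph = {
    "lobby": [("floor3", 40.0, 0.0),     # stairs: 40 s, no variance
              ("floor3", 25.0, 400.0)],  # lift: 25 s mean, sd 20 s
}
print(risk_aware_dijkstra(graph, "lobby", risk_aversion=1.0)["floor3"])
```

With risk_aversion set to 0 the traveller takes the lift (25 s expected); with risk_aversion 1 the stairs win, which is the kind of per-person customisation the project has in mind.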
The second grand idea is even more pretentious. For scientists and engineers, computing is very different from managing data systems or websites. I teach both Python and Matlab to undergraduate engineers, and they mostly hate it. Although engineers are problem solvers, their skills, expectations and needs are very different from those of traditional programmers, and certainly different from those of computer scientists, who are essentially mathematicians from an engineer’s point of view. The tools and provisions in, well, all the programming languages I know of are woefully inadequate for our purposes. We want to define a new high-level programming language with three features we think useful languages should have: it treats units like “meters” or “newton meters per second” as integral, native parts of numerical values; it automatically handles uncertainty propagation and sensitivity analysis for all calculations; and it has extensive built-in facilities for tracking and linking code elements and structure to justifications, data-provenance records, assumptions, and prior and subsequent analyses. The ideas are explained in https://sites.google.com/site/davmarkup/computer-language and https://sites.google.com/site/humanealgorithms/say/otherhumans. Nick Gray has been working on automating uncertainty analysis (https://www.overleaf.com/project/6050778244f8a80cc16ef909), which is the intellectually challenging part of this project.
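As a toy illustration of the first two features (not the proposed language itself; the class name, unit representation, and propagation rules are all my own invention), one can bolt units and first-order uncertainty propagation onto a small Python class. The propagation assumes independent inputs with nonzero values, which a real implementation would of course not get to assume.

```python
import math

class Q:
    """A toy quantity carrying a value, a standard uncertainty, and units.

    Units are a dict of base-unit exponents, e.g. {"m": 1, "s": -1} for m/s.
    Propagation uses first-order rules and assumes independent inputs.
    """
    def __init__(self, value, unc=0.0, units=None):
        self.value, self.unc, self.units = value, unc, units or {}

    def __add__(self, other):
        if self.units != other.units:
            raise ValueError("unit mismatch")  # cannot add metres to seconds
        return Q(self.value + other.value,
                 math.hypot(self.unc, other.unc), dict(self.units))

    def __mul__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p
            if units[u] == 0:
                del units[u]
        value = self.value * other.value
        # relative uncertainties add in quadrature under multiplication
        rel = math.hypot(self.unc / self.value, other.unc / other.value)
        return Q(value, abs(value) * rel, units)

    def __repr__(self):
        unit = " ".join(f"{u}^{p}" if p != 1 else u
                        for u, p in sorted(self.units.items()))
        return f"{self.value:g} ± {self.unc:g} {unit}"

force = Q(12.0, 0.5, {"kg": 1, "m": 1, "s": -2})  # newtons, ±0.5
arm = Q(2.0, 0.1, {"m": 1})                       # metres, ±0.1
print(force * arm)  # a torque, in kg m^2 s^-2, with propagated uncertainty
```

The point is that the units and the uncertainty travel through the arithmetic automatically, and a unit mismatch is caught at the moment of the offending operation rather than silently producing nonsense.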
Nick and I would be happy to talk to you about these projects any time.
Best regards,
Scott
Dear Scott,
I am sorry for the late response. The last month or so overwhelmed me, mostly due to the end of the academic year and my heavy involvement in coordinating MSc projects (we have 250+ of them this year, a 500% jump compared to two years ago, plus the pandemic factor). This came on top of other (including eternal) duties which all coincided (perhaps bad luck, poor planning, or a bit of both).
I understand that you received Sebastian's response regarding the Wolfram Mathematica language. Sadly, we are not research-active in the area of programming languages, but we really enjoyed reading and discussing your paper on ships and tides. Optimisation methods are where we feel much stronger. We have had a couple of meetings in our Networks group in the meantime to discuss our understanding of the challenges in the humane algorithms area, and in particular where the potential overlaps between our groups lie. I will get back to you with more concrete suggestions very soon.
Have a great weekend, everyone,
Best, Leszek
Dear Leszek,
We're proceeding with our work on what we call an "uncertainty language" which we intend to be a syntax for specifying and computing with general uncertainty structures that is epistemologically sophisticated yet shallow and simple enough for use by people with little or no quantitative training. It's part of a three-year project with Airbus funded by the UK government.
We have mathematicians, statisticians and programmers, but we're looking for some computer scientists to help address the big picture. We don't think the issues are terribly profound, but they are interesting and fashionable, and possibly important if they enable practicing engineers to up their game in uncertainty quantification.
Although you and your colleagues are not research-active in programming languages, perhaps some of your 250 master's students would be interested in this area. We are used to working with multidisciplinary teams, and I think I can promise a good environment with collegial interactions for such students. We can make a list of possible project topics if that might be of use or interest to you.
Just a thought.
Good weekend.
Cheers,
Scott
From Leszek's colleague Sebastian Wild:
If you ask me, the next step towards that goal would be to pick some concrete, small, ideally approachable example of research work in either group and present that (interactively) to the other group in considerable detail.
Yeah, of course I agree completely. I’m not an engineer myself, and my experience with them is also that they often bite off more than they can chew and then settle for half-answers, which can be pretty unsatisfying intellectually. However, finding that nice, tractable bit to focus on is something of an art, isn’t it? Sometimes I don’t know what it is until I stumble onto the solution.
Maybe I have a suggestion for a more bite-sized CS problem: a symbolic algebra simplifier to reduce the repetitions of uncertain variables in mathematical expressions. It is a problem in computer algebra and in the design of optimising compilers. Although it may not seem terribly glamorous, it absolutely could be a game changer for getting computers to do uncertainty analysis automatically, which would be enormously beneficial to society and to individual humans. This is really a subproblem in the Puffin project, but one that’s been on the back burner for a while. It has plagued me for several years, and I even wrote some code to parse expressions and pattern-match through them with “twiddling and dislocating” (see the attached “eccad janos” poster from an old computer algebra conference). But the problem was beyond my skills, or at least my attention span. I employed some undergraduate and post-doctoral computer scientists to work on it without much progress, but I think it should be fairly tractable for a good student or a lecturer who can spare the time to think about it. I’ve always felt we just didn’t get the right people to work on this problem. Solving it would bring a host of important messy problems into a realm where computers could handle them straightforwardly.
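For the flavour of why repeated uncertain variables matter, here is a tiny interval-arithmetic illustration (my own toy example, not the Puffin code). Interval operations treat their operands as independent, so each extra occurrence of the same variable artificially widens the computed bounds; an algebraic simplifier that reduces repetitions therefore directly tightens the answer.

```python
class Interval:
    """Closed interval [lo, hi] with the usual interval arithmetic.

    Each operation treats its operands as independent, so repeated
    occurrences of the same variable inflate the result (the
    "dependency problem").
    """
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        prods = [self.lo * o.lo, self.lo * o.hi,
                 self.hi * o.lo, self.hi * o.hi]
        return Interval(min(prods), max(prods))

    def __repr__(self):
        return f"[{self.lo:g}, {self.hi:g}]"

x = Interval(0.4, 0.6)
one = Interval(1.0, 1.0)

# Two algebraically identical expressions for x(1 - x):
naive = x - x * x     # x occurs three times -> artificially wide bounds
fewer = x * (one - x) # x occurs twice -> tighter bounds
print(naive, fewer)   # → [0.04, 0.44] [0.16, 0.36]
```

The true range of x(1 − x) over [0.4, 0.6] is [0.24, 0.25], so neither bound is exact, but simply rewriting the expression to mention x one fewer time shrinks the computed interval substantially. That rewriting step, done automatically and in general, is exactly the simplifier problem.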