If any of these topics provoke thoughts, shoot me an email so we can discuss!
These common programming tasks are hard and can be excessively time-consuming.
What would programming look like if numerics were symbolic by default, with an integrated math engine? Machine-native numbers could remain opt-in for performance, of course.
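The flavor of the idea can be sketched with Python's standard-library `Fraction`, which gives exact arithmetic where floats silently round (a minimal stand-in for a real symbolic engine):

```python
from fractions import Fraction

# Exact-by-default arithmetic: 1/10 + 2/10 is exactly 3/10,
# with no binary floating-point rounding error.
exact = Fraction(1, 10) + Fraction(2, 10)
assert exact == Fraction(3, 10)

# The familiar machine-native behavior, for contrast:
assert 0.1 + 0.2 != 0.3

# Opting in to a machine-native representation when speed matters:
fast = float(exact)
```

In a symbolic-by-default language, the first behavior would be what plain literals like `0.1 + 0.2` give you, and the `float(...)` coercion would be the explicit performance escape hatch.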
DBs and PLs should use the same abstractions—not only the same primitive types, but PLs should be able to build indices and leverage a query planner over program-resident data.
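As a toy sketch of what that might feel like (all names here are invented for illustration): program-resident records get a database-style hash index, and a trivial "query planner" chooses between an index lookup and a full scan.

```python
from collections import defaultdict

# Ordinary in-memory program data.
people = [
    {"name": "ada", "city": "london"},
    {"name": "grace", "city": "nyc"},
    {"name": "alan", "city": "london"},
]

def build_index(rows, key):
    """Build a hash index over one field, as a database would."""
    index = defaultdict(list)
    for row in rows:
        index[row[key]].append(row)
    return index

indexes = {"city": build_index(people, "city")}

def query(rows, key, value):
    """Tiny planner: use an index if one exists, else scan."""
    if key in indexes:
        return indexes[key][value]               # O(1) index lookup
    return [r for r in rows if r[key] == value]  # O(n) scan

londoners = query(people, "city", "london")
```

In the imagined language, `build_index` and the planner would be invisible machinery maintained by the runtime, the way a database maintains them, rather than something the programmer hand-rolls per data structure.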
What’s the fundamental difference between relations (relational algebra) and ADTs (recursive data)?
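To make the question concrete, here is the same small tree expressed both ways (the data is invented): as a relation of edges, queried by filtering, and as a recursive datatype, traversed by following the structure.

```python
from dataclasses import dataclass, field

# As a relation: a flat set of (parent, child) tuples.
# Structure is recovered by queries (filters/joins).
edges = {("root", "a"), ("root", "b"), ("a", "c")}
rel_children_of_root = {c for (p, c) in edges if p == "root"}

# As a recursive ADT: the structure is baked into the value itself
# and traversed by recursion.
@dataclass
class Tree:
    label: str
    children: list = field(default_factory=list)

tree = Tree("root", [Tree("a", [Tree("c")]), Tree("b")])
adt_children_of_root = {t.label for t in tree.children}
```

The relational form makes ad-hoc queries cheap but hides the shape; the recursive form makes the shape primary but makes cross-cutting queries awkward, which is one way to frame where the fundamental difference might lie.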
What’s the fundamental difference between OO programming and ADT-driven programming? Ultimately, you might be able to express both with structural typing, so what’s the human difference?
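One hint that structural typing can host both styles: with Python's `typing.Protocol` (structural subtyping), an OO-style class and a plain-data ADT-style value satisfy the same interface without sharing any nominal base class. The names below are illustrative only.

```python
from dataclasses import dataclass
from typing import Protocol

class HasArea(Protocol):
    """A structural interface: anything with an `area` method matches."""
    def area(self) -> float: ...

# OO style: behavior lives on the object from the start.
class Circle:
    def __init__(self, r: float) -> None:
        self.r = r
    def area(self) -> float:
        return 3.14159 * self.r * self.r

# ADT style: a plain record of data, with behavior attached afterward.
@dataclass
class Rect:
    w: float
    h: float
    def area(self) -> float:
        return self.w * self.h

def total_area(shapes: list[HasArea]) -> float:
    # Neither Circle nor Rect declares it implements HasArea;
    # matching the shape is enough.
    return sum(s.area() for s in shapes)

total = total_area([Circle(1.0), Rect(2.0, 3.0)])
```

Since the machine is satisfied either way, whatever difference remains between the two styles is a difference in how humans organize and discover behavior, which is exactly the question.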
Many programming tasks are algorithmic realizations of known constraints. That said, neither trigger-action event systems like Eve nor constraint-oriented programming systems like Babelsberg seem to scale well to real problems. The issue may be that, although concise, constraint-based representations obscure the answer to the question, “When X occurs, what happens next and in what order?”. With ordinary, straight-line code, one can answer that question by walking the code. In contrast, a trigger-action constraint system is almost necessarily concurrent, and constraint solvers don’t necessarily explain how they intend to satisfy the constraints.
A good hybrid approach may be specifying the problem using constraints, but then statically generating the constraint-satisfaction algorithm (that is, program synthesis). The algorithm thus has the ordinary straight-line inspectability while maintaining a concise, high-level representation.
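A toy illustration of the hybrid (the problem and names are invented): the spec is a pair of constraints, and the function below stands in for the straight-line code a synthesizer might emit to satisfy them.

```python
# Constraint specification: find a, b such that
#   a + b == total
#   a - b == diff

def solve(total, diff):
    """Hand-written stand-in for synthesized straight-line code.
    Unlike a runtime solver, every step is inspectable and runs
    in a definite order."""
    a = (total + diff) / 2
    b = (total - diff) / 2
    return a, b

a, b = solve(10, 2)
```

The reader can answer "what happens next and in what order?" by walking `solve` line by line, while the constraint pair remains the concise, high-level source of truth that the code is generated from.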
The idea of documents that are programs has two competing definitions: literate programming, in which the document describes the operation of the program, and program-backed documents, in which the document describes the output of the program. Perhaps they can be unified, but you’ll want to realize there’s a distinction.
When reimagining programming systems, one possible heuristic is to design for children (most notably followed by Seymour Papert and Alan Kay). Besides allowing for a “blank slate” design, the key advantage of designing for kids is that it forces you to think about the main concepts humans use to navigate the world and how programming systems can match those concepts. The disadvantage is that designing for children is easily conflated with designing for learnability. Alas, “learnability” is a short-term concern. Too much concern about learning applies pressure to the system to provide short-term measurables rather than long-term utility. There’s a temptation to over-adapt to human preconceptions rather than design for essential simplicity. So design around the concepts that children know, but trust them to learn the necessary formalization of those concepts.
Single-metaphor reinventions of programming don’t work, whether the metaphor is computational (“everything is an object”, “everything is immutable”, “everything is just bytes”, “everything is a table row”, “everything is lazy”, “everything is eager”) or conceptual (“everything is in a hierarchy”, “everything is an instance of visual concept x”). The real hard problem is getting the different concepts to play well together: supporting multiple, ostensibly incompatible, first-class representations.