Wikipedia redirects “Syntax Directed Editing” to the article “Structure editor”, and claims the phrases are synonymous.
“In linguistics, syntax is the study of the structure of grammatical utterances, and accordingly syntax-directed editor is a synonym for structure editor. Language-based editor and language-sensitive editor are also synonyms”
I started this thread mostly because I was surprised to see the correspondence between prosemirror and this esoteric / forgotten approach to code editor architecture. The author indicated that SDEs, a.k.a. structure editors, are rather exotic and not widely used or developed, but the WP article implies the approach is better known. That said, the most recent reference in the citations is from 2000, and the majority are from the 1980s.
I’m generally interested in ways structured-editing techniques could be employed to make better tools for writing, working with, reusing, and verifying documents, data, and files that are not intrinsically executable or structured. Gentle, unobtrusive automated ETL assistance, data capture / entry, file naming / folder organization, content (re)discovery, staleness checking, progress reporting, document templating, intent modeling & workflow optimization…
The (ridiculously named) hotness these days is “robotic process automation”: basically DSLs and orchestration platforms that help business information workers be more efficient by automating otherwise manual (read: mouse) tasks, especially those that involve shuttling digital works-in-progress through an assembly line of sequential software tools that are not well integrated and may never offer official APIs powerful enough to enable deep integration. In effect, as more and more work is conducted digitally, workers are increasingly needed to fulfill the function of the conveyor belt in an otherwise purely digital assembly line. The inputs are all digital. The tools are all digital. The output is all digital. The individual tools can be automated to some degree. But expert human operators are required to set up each tool, install it “on the line,” and very often direct or supervise its operation on each unit of work.
Why can’t our software assembly lines partially configure themselves, or at least suggest 10 best guesses at how they might be configured, given 1) a set of representative inputs to the work process (CSVs, expense reports, SOPs, PDF forms, etc.); 2) a couple of keywords related to the desired output and/or representative mock output files; and/or 3) a constantly refined Markov model of previous workflow configurations, based on observing / inferring process models from all previous file, disk, and application activity that seemed to be contingent on a given Finished Document, identified as such when it was emailed to the boss / client / printer? This is process mining. The data are probably already captured by IT departments in digital forensics logs / enterprise antivirus / security databases.
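To make the Markov-model idea concrete, here’s a minimal sketch of the simplest version: learn first-order transition probabilities between observed workflow steps from past traces, then rank the most likely next steps as “best guesses.” All step names and traces here are made up for illustration; a real system would mine them from actual event logs.

```python
from collections import Counter, defaultdict

# Hypothetical event traces: each is the sequence of tools/steps observed
# before a Finished Document was produced (all names are invented).
traces = [
    ["download_csv", "clean_excel", "pivot_excel", "export_pdf", "email_boss"],
    ["download_csv", "clean_excel", "export_pdf", "email_boss"],
    ["scan_form", "ocr", "clean_excel", "export_pdf", "email_boss"],
    ["download_csv", "pivot_excel", "export_pdf", "email_client"],
]

def build_markov(traces):
    """Count first-order transitions (step -> next step) across all traces,
    then normalize the counts into transition probabilities."""
    counts = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {
        a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
        for a, nexts in counts.items()
    }

def suggest_next(model, step, k=3):
    """Top-k most probable next steps after `step`: the auto-suggested
    'best guesses' at how the line might continue."""
    nexts = model.get(step, {})
    return sorted(nexts.items(), key=lambda kv: -kv[1])[:k]

model = build_markov(traces)
print(suggest_next(model, "clean_excel"))
# In these toy traces, "export_pdf" follows "clean_excel" 2 times out of 3.
```

A real process miner would go well beyond this (higher-order context, concurrency, noise filtering, whole-workflow scaffolds rather than single next steps), but even a first-order chain over event logs is enough to rank plausible configurations for a human to triage.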
While 9 out of 10, or even 99 out of 100, of these autosuggested configurations or workflow scaffolds might be semantically invalid, if it only takes 10 minutes to evaluate all of the guesses and 1 hour to do a first pass validating the behavior of the most promising prototype, while doing the same manually would take 10+ hours… then man, why are knowledge workers effectively doing menial tasks akin to workholding, material transport, and setting up and connecting the machines needed for the production run?
Is anyone else interested in implicitly / inferentially structured data, “structured editing”, or in general trying to figure out how to build tools that reduce manual labor in digital assembly lines?