...

Sprint goals: extend some of the "read only" APIs, finalize unit/integration testing for existing functionality, extend the interface, develop additional loaders, and begin developing editing capabilities.

Priority things

  • JFW: For "semantic type" mechanism - if there are tree numbers, order by that and indent"
  • DONE: OWL deployment
  • JFW: Refresh server deployments
    • UMLS, SNOMED, SNOMED VET, ICD (reload on machine)
  • DONE: JFW: QA queries
    • referential integrity
    • Verify for each type of loader against the sample data.
  • JFW: Jenkins - mojo tests
    • REST integration tests
  • BAC: OWL - NCIt
  • BAC: consider adding top-level hierarchies as "semantic types" to concepts for loaders in general
    • This could be a function of the tree position or transitive closure computers.
    • The corresponding metadata would have to be created as well.
  • BAC: remove "void addXXX" and "void removeXXX" methods from model objects. Use getObjects().add/remove(...) only
  • DONE: Terminology sampler mojo - bring over from the old term server project.
  • Model
    • DONE: General class axioms
    • Mapping, MapRecord, MapEntry (each with attributes) - similar to Subset, SubsetMember but with one more layer (see the sketch after this list).
      • Then implement this for RRF loader
      • Then implement this for RF2 loader (snapshot)
      • Then implement this for RF2 loader (delta)
  • User Interface
    • Enable the glass pane while switching tabs - increment in the tab controller and decrement at the end of the controller.
  • Integration tests - JPA/REST - get them tested again
  • DONE: support "mode" parameter on loaders to automatically recreate db and clear indexes.
  • Have the search handler work for all searching, including relationships and trees - separate methods to build the query? Probably.
  • DONE: Improve search
    • Create "SearchHandler" as an extension point (like graph resolution handler).  Have a "default implementation"
    • Search algorithm
      • First search on the exact string (i.e. a "literal" search)
        • Handles short strings containing special characters (e.g. "!" or "+" or "Ca+")
          • An alternative is to save untokenized forms of all strings for exact searching.
        • e.g. for the query "+" use a Lucene query like: terminology:SNOMEDCT AND version:latest AND atoms.nameSort:"\+" (unclear whether the + must be escaped when quoted; see the escaping sketch after this list)
      • Then search on matches - this is the normal search
      • The trick is to combine the results of the two searches into a single list: the ordered results of the literal search at the head, followed by the ordered results of the normal search, with any duplicates removed.
        • Could consider "(literal search) OR (normal search)" as a single query (see the sketch after this list).
        • This is better because Lucene can then do the paging.
      • If no results, then apply spelling correction and (then) acronym expansion and search again (see the SpellChecker sketch after this list).
        • Use the Lucene SpellChecker class for this.
        • config/src/main/resources/data/spelling.txt
        • config/src/main/resources/data/acronym.txt
        • For obtaining words for spelling correction, use "FieldedStringTokenizer" with " \t" as delimiter
          • (consider this delimiter list later: " \t-({[)}]_!@#%&*\\:;\"',.?/~+=|<>$`^")
        • For obtaining words for acronym expansion, split only on " \t"
      • If no results, then try appending * to each term and search again (see the sketch after this list).
    • Should the autocomplete algorithm include acronym expansion? NO
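
For the "remove addXXX/removeXXX" item above, a minimal sketch of the resulting convention, using illustrative stand-in classes (Concept, Atom) rather than the actual model objects:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a model object; not the project's actual class.
class Atom {
}

public class Concept {

  private List<Atom> atoms = new ArrayList<>();

  // Keep only the collection getter; addAtom(...)/removeAtom(...) are gone.
  public List<Atom> getAtoms() {
    return atoms;
  }
}
```

Callers then write concept.getAtoms().add(atom) or concept.getAtoms().remove(atom), leaving a single mutation path on each model object.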
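
For the Mapping/MapRecord/MapEntry item, a rough sketch of the three-layer shape by analogy with Subset/SubsetMember; all field names here are guesses for illustration, not the actual model:

```java
import java.util.ArrayList;
import java.util.List;

// Mapping -> MapRecord -> MapEntry: one layer deeper than
// Subset -> SubsetMember. Fields are illustrative guesses only; each class
// would also carry attributes, per the notes.
class MapEntry {
  String rule;    // e.g. an RF2 map rule
  String target;  // the mapped-to code
}

class MapRecord {
  String source;  // the mapped-from concept
  List<MapEntry> entries = new ArrayList<>();
}

class Mapping {
  String name;    // the map set itself
  List<MapRecord> records = new ArrayList<>();
}
```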
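
For the SearchHandler extension point, one possible shape; only the name and the "default implementation" idea come from these notes, and the signature is an assumption:

```java
import java.util.List;

// Hypothetical shape of the "SearchHandler" extension point, registered in
// the configuration the same way the graph resolution handler is. The
// signature is an assumption, not the project's actual interface.
public interface SearchHandler {

  // Runs the staged search (literal, then normal, then spelling/acronym
  // correction, then wildcards) and returns ranked matching object ids.
  List<Long> getQueryResults(String terminology, String version, String query)
    throws Exception;
}
```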
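
For the literal-search escaping question, Lucene's QueryParser.escape backslash-escapes every special character, which sidesteps whether "+" needs escaping inside quotes; field names follow the example in the notes:

```java
import org.apache.lucene.queryparser.classic.QueryParser;

// Builds the literal-search clause from the notes. QueryParser.escape
// backslash-escapes all Lucene special characters, so the "+" case is safe
// whether or not quoting alone would have sufficed.
public class LiteralQueryBuilder {

  public static String build(String terminology, String version, String term) {
    return "terminology:" + terminology + " AND version:" + version
        + " AND atoms.nameSort:\"" + QueryParser.escape(term) + "\"";
  }

  public static void main(String[] args) {
    // Prints: terminology:SNOMEDCT AND version:latest AND atoms.nameSort:"\+"
    System.out.println(build("SNOMEDCT", "latest", "+"));
  }
}
```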
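
For the single-query "(literal search) OR (normal search)" idea, a sketch: a document matching both clauses scores once (so there are no duplicates to strip), and boosting the literal clause floats exact matches to the head of the pageable result list. Field names are illustrative:

```java
import org.apache.lucene.queryparser.classic.QueryParser;

// Single-query form of "(literal) OR (normal)". Escaping both clauses keeps
// the combined query parseable even for special-character input like "+".
public class CombinedQueryBuilder {

  public static String build(String userQuery) {
    String literal =
        "atoms.nameSort:\"" + QueryParser.escape(userQuery) + "\"";
    String normal = "atoms.name:(" + QueryParser.escape(userQuery) + ")";
    // Boost the literal clause so exact matches sort first.
    return "(" + literal + ")^10 OR (" + normal + ")";
  }
}
```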
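
For the spelling-correction step, a sketch using Lucene's SpellChecker against the spelling.txt path from the notes (assumed to be one word per line, which is what PlainTextDictionary expects); these are Lucene 4.x signatures, and other versions differ slightly:

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.spell.PlainTextDictionary;
import org.apache.lucene.search.spell.SpellChecker;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// Builds a spelling index from spelling.txt and suggests replacement words
// when the normal search comes back empty. Lucene 4.x API.
public class SpellingCorrection {

  public static void main(String[] args) throws Exception {
    SpellChecker checker =
        new SpellChecker(FSDirectory.open(new File("spellindex")));
    checker.indexDictionary(
        new PlainTextDictionary(
            new File("config/src/main/resources/data/spelling.txt")),
        new IndexWriterConfig(Version.LUCENE_CURRENT,
            new StandardAnalyzer(Version.LUCENE_CURRENT)),
        false);

    // Suggest up to 5 corrections for a word from the failed query.
    for (String suggestion : checker.suggestSimilar("ventriclar", 5)) {
      System.out.println(suggestion);
    }
  }
}
```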
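
For the final wildcard fallback, a sketch; plain String.split on space/tab stands in here for the project's FieldedStringTokenizer:

```java
// Appends "*" to each whitespace-delimited term so the failed query is
// retried as a prefix search, e.g. "myocard infarct" -> "myocard* infarct*".
public class WildcardFallback {

  public static String addWildcards(String query) {
    StringBuilder sb = new StringBuilder();
    for (String token : query.split("[ \t]+")) {
      if (token.isEmpty()) {
        continue;
      }
      if (sb.length() > 0) {
        sb.append(' ');
      }
      sb.append(token).append('*');
    }
    return sb.toString();
  }
}
```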

...