...

  • ValidationResult precheckLoadFromSourceData (see the sketch below)
  • Application hardening
    • Error handling during configuration (bad db, username, password)
    • directory already exists?
    • DONE failed load
      • sourceData marked as FAILED -> only support certain actions
    • DONE cancelled load -> allow reload of delta
    • source data missing, some terminology data loaded
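A rough sketch of what the precheck could look like; the ValidationResult/SourceData types and fields below are placeholders, not the actual API:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Placeholder types - the real ValidationResult/SourceData API will differ.
class ValidationResult {
  final List<String> errors = new ArrayList<>();
  final List<String> warnings = new ArrayList<>();
  boolean isValid() { return errors.isEmpty(); }
}

enum SourceDataStatus { NEW, LOADING, FINISHED, CANCELLED, FAILED }

class SourceDataPrecheck {
  // FAILED source data only supports certain actions (remove/reload), and a
  // non-empty target directory suggests a half-finished earlier load.
  ValidationResult precheckLoadFromSourceData(File dataDir, SourceDataStatus status) {
    ValidationResult result = new ValidationResult();
    if (status == SourceDataStatus.FAILED) {
      result.errors.add("Source data is FAILED - only remove/reload is supported");
    }
    String[] contents = dataDir.exists() ? dataDir.list() : null;
    if (contents != null && contents.length > 0) {
      result.warnings.add("Directory already exists and is not empty: " + dataDir);
    }
    return result;
  }
}
```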
  • spelling.txt, acronyms.txt
    • package these files into the .war (so they are in WEB-INF/classes)
    • getResourceAsStream("spelling.txt") -> source.data.dir/spelling.txt
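A possible shape for that lookup - prefer the on-disk copy under source.data.dir and fall back to the copy packaged in the .war; the helper itself is made up:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class ResourceResolver {
  // Prefer an on-disk copy under source.data.dir; fall back to the copy
  // packaged in the .war (WEB-INF/classes) via the classpath.
  static InputStream open(String sourceDataDir, String name) throws IOException {
    File file = new File(sourceDataDir, name);
    if (file.exists()) {
      return new FileInputStream(file);
    }
    InputStream in = ResourceResolver.class.getClassLoader().getResourceAsStream(name);
    if (in == null) {
      throw new IOException("Resource not found on disk or classpath: " + name);
    }
    return in;
  }
}
```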
  • Terminology Starter Kit - ...
    • Run the .war file via Jetty, all packaged as an executable .jar (see the launcher sketch after this list)
    • Single zip file download containing
      • termserver.jar
      • data/ (spelling.txt, acronyms.txt)
      • indexes/ (lucene indexes)
      • hsqldb/ (database files) 
    • config.properties
    • StarterKitApplication (i.e. the Jersey application)
    • Maven process
      • ssk-rest (module)
        • Build .war file
        • .war file must contain a completely ready config.properties file (e.g. in WEB-INF/classes) - put in src/main/resources 
          • no filtering required
          • exact hsqldb jdbc url, user, password
          • indexBase=./indexes
          • spellingFile=./data/spelling.txt
          • security.guest.disabled=false
          • mail server??
          • landing/ login/ intro page configuration
      • ssk-app/ (module)
      • pom.xml
        • Load database (Rf2SnapshotSourceDataLoadMojo)
          • specify indexes directory (${project.build.directory}/app/indexes)
          • specify hsqldb directory (${project.build.directory}/app/db)
          • specify snomed (or input dir) (-Dinput.dir)
        • Gather all resources into ${project.build.directory}/app
          • package the .war file (from ssk-rest) as an executable .jar that runs Jetty (snomed-starter-kit.jar)
          • package the spelling.txt and acronyms.txt files in ${project.build.directory}/app/data
        • Use the assembly plugin to zip everything from ${project.build.directory}/app/* into snomed-starter-kit.zip
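One way the executable .jar could boot the packaged .war with embedded Jetty; the class name, port, and .war location below are illustrative, not decided:

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class StarterKitLauncher {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    WebAppContext webapp = new WebAppContext();
    webapp.setContextPath("/");
    webapp.setWar("termserver.war"); // the ssk-rest artifact, shipped next to the jar
    server.setHandler(webapp);
    server.start();
    server.join(); // block until the server stops
  }
}
```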
  • DONE: Get initial starter kit running
  • spelling/acronyms data -> load, need source data loaders
    • copy "default" spelling/acronym files into the source data dir
    • then automatically load them 
    • support ability to remove/reload as well
    • configure service ? source data service?
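A sketch of the copy-then-load step; paths and the loader hook are assumptions:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

class DefaultDataSeeder {
  // Copy the packaged "default" spelling/acronym files into the source data
  // dir if they are not already there, so the normal loaders can pick them up.
  static void seed(Path sourceDataDir) throws IOException {
    for (String name : new String[] {"spelling.txt", "acronyms.txt"}) {
      Path target = sourceDataDir.resolve(name);
      if (!Files.exists(target)) {
        InputStream in = DefaultDataSeeder.class.getClassLoader().getResourceAsStream(name);
        if (in != null) {
          try {
            Files.copy(in, target);
          } finally {
            in.close();
          }
        }
      }
    }
    // ... then run the corresponding source data loader for each file
  }
}
```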

...

  • TODO: Rf2Full LoaderAlgorithm - move "Look through files to obtain ALL release versions" logic to the file sorter.
    • sorter.getReleases(): List<String>
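For RF2 Full files the release versions are just the distinct effectiveTime values (column 1), so the sorter could collect them while it reads the files anyway; roughly:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

class ReleaseScanner {
  // Collect distinct effectiveTime values (column index 1 in RF2), sorted;
  // a real sorter would accumulate these across all component files.
  static List<String> getReleases(File rf2File) throws IOException {
    TreeSet<String> releases = new TreeSet<>();
    try (BufferedReader in = new BufferedReader(new FileReader(rf2File))) {
      String line = in.readLine(); // skip the header row
      while ((line = in.readLine()) != null) {
        String[] fields = line.split("\t", -1);
        if (fields.length > 1) {
          releases.add(fields[1]); // e.g. "20160131"
        }
      }
    }
    return new ArrayList<>(releases);
  }
}
```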
  • TODO: Verify that concept subset members and atom subset members should appear (graph resolver)
  • TODO: SourceDataFileUtility - error handling for extractCompressedSourceDataFile
    • // TODO Delete any successfully extracted files on failed load
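Roughly what that could look like - track files as they are written and delete them if extraction fails partway (zip-only sketch):

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

class SafeExtractor {
  static List<File> extract(File archive, File destDir) throws IOException {
    List<File> extracted = new ArrayList<>();
    try (ZipFile zip = new ZipFile(archive)) {
      Enumeration<? extends ZipEntry> entries = zip.entries();
      while (entries.hasMoreElements()) {
        ZipEntry entry = entries.nextElement();
        File out = new File(destDir, entry.getName());
        if (entry.isDirectory()) {
          out.mkdirs();
          continue;
        }
        out.getParentFile().mkdirs();
        try (InputStream in = zip.getInputStream(entry)) {
          Files.copy(in, out.toPath());
          extracted.add(out);
        }
      }
    } catch (IOException e) {
      // Delete any successfully extracted files on failed load
      for (File f : extracted) {
        f.delete();
      }
      throw e;
    }
    return extracted;
  }
}
```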

  • TODO: SourceDataServiceRestImpl - align its URLs with the content service and update the Java and JS clients.
  • TODO: popout in content controller should call a contentService method - controllers should never have URL fragments. Also make the "simple" part a parameter.
  • TODO: remove the part of content controller that picks ICD10CM over ICD10 - actually, probably better to just remove ICD10 from CLAML load (UTS license doesn't cover it anyway).
  • TODO: Generalize the handling of "simple" in isTabShowing of tab controller
  • TODO: don't show Precedence list if it is empty (or say  "Precedence list: (EMPTY)")
  • BAC: Remove PfscParameter.
  • Bring TypeKeyValue over from tt project (including loader mojo and configurations), update tt pom.xml too.
  • Rework spelling/acronyms to be TypeKeyValue data instead -> rework spelling correction and algorithm lookup handlers.
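For reference, the TypeKeyValue shape is roughly this (field names assumed); spelling entries would become rows with type "SPELLING" and the word as the key:

```java
public class TypeKeyValue {
  private Long id;
  private String type;  // e.g. "SPELLING", "ACRONYM"
  private String key;   // e.g. the word, or the acronym
  private String value; // e.g. the expansion (may be empty for spelling entries)
  // getters/setters omitted
}
```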
  • loader "description logic"
  • App configuration -> don't require the configure service, so the Jersey Application doesn't need it
    • contentService.js would just check appConfig.isConfigured?
  • Mapping REST APIs (if not already there)
  • Content service call for finding paths between two "component info"
    • e.g. CUI "distance", with parameters to control types of relationships (could consider ECLish for this too).
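A sketch of the distance computation as a breadth-first search over the relationship graph, with a predicate standing in for the proposed relationship-type parameters; the graph representation here is made up:

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

class PathFinder {
  // graph: concept id -> list of {relationship type, target concept id}.
  // Returns the shortest path length ("CUI distance"), or -1 if unreachable
  // under the allowed relationship types.
  static int distance(Map<String, List<String[]>> graph, String from, String to,
      Predicate<String> typeFilter) {
    Set<String> seen = new HashSet<>();
    Deque<String> queue = new ArrayDeque<>();
    Map<String, Integer> dist = new HashMap<>();
    queue.add(from);
    seen.add(from);
    dist.put(from, 0);
    while (!queue.isEmpty()) {
      String node = queue.poll();
      if (node.equals(to)) {
        return dist.get(node);
      }
      for (String[] rel : graph.getOrDefault(node, Collections.<String[]> emptyList())) {
        String type = rel[0], target = rel[1];
        if (typeFilter.test(type) && seen.add(target)) {
          dist.put(target, dist.get(node) + 1);
          queue.add(target);
        }
      }
    }
    return -1;
  }
}
```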
  • Enhancements for ECL
    • General
      • The way the expression builder is used is different from treepos/transitive closure. In particular, it’s not wired to properly handle “cancel”. We should declare it at the top level and instantiate it like we do with the others
    • Rf2DeltaLoaderAlgorithm
      • Like with transitiveClosure and treePos, we need to have a “reset” that gets called for expressions so that we clear whatever is there first, THEN recompute the full expression index.
      • The Ecl “reset” method should clear the indexes for the given terminology/version
    • RrfLoaderAlgorithm
      • Should implement this, just like for Rf2Snapshot (but for each source; see how the tree position computer works).
    • ClamlLoaderAlgorithm
      • Should implement this, just like for Rf2Snapshot.
    • OwlLoaderAlgorithm
      • Should implement this, just like for Rf2Snapshot.
    • Advanced search
      • For “description logic” sources, we can show/support the full ECL
      • For non-“description logic” and non-“metathesaurus”, if we’ve computed indexes, we can still support a limited form of ECL.  In particular, I’m thinking about supporting “descendant” searches.  The “semantic type” does this at the top level, but there’s no reason we can’t arbitrarily support it at a lower level.  All non-metathesaurus sources have a hierarchy.
    • We should add ECL testing to the mojo integration tests
      • e.g. perform several test searches with ECL to verify they return results. 
      • That way we're validating that the loader is computing indexes.
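e.g. the test could boil down to something like this; the client interface below is made up and the real API will differ:

```java
import static org.junit.Assert.assertFalse;

import java.util.List;

public class EclLoaderCheck {

  // Hypothetical client interface - stand-in for the real content service.
  interface ContentClient {
    List<String> findConceptsByEcl(String terminology, String version, String ecl)
        throws Exception;
  }

  // Run an ECL search and verify it returns results, which proves the loader
  // actually computed the expression indexes.
  static void verifyEclIndexes(ContentClient client) throws Exception {
    List<String> hits =
        client.findConceptsByEcl("SNOMEDCT", "latest", "< 404684003 |Clinical finding|");
    assertFalse("ECL descendant search should return results after load", hits.isEmpty());
  }
}
```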

...