...

  • Switch to a default build of the SAMPLE_2014AB and redo metadata and content tests against that data set.  Then we may be able to have more complete hierarchies and some code data to work with.
  • Consider all query-based lookups in ContentServiceJpa and reimplement as lucene lookups instead if it seems that it would improve performance.
  • Finalize REST service for a general concept query (e.g. lucene query, HQL query, or SQL query) - see the sketch after this list
    • Must start with "SELECT " and not contain ";"
      • provide an appropriate error message via exception if not
    • Execute the query but check the object type of the first result
      • if it doesn't match the expected type, fail with an exception
    • Use a query timeout, e.g. query.setHint("javax.persistence.query.timeout", timeout);
  • Implement NewConceptMinRequirementsCheck (see the sketch after this list)
    • Requires that a concept has at least one atom and one semantic type.
    • Create integration (JPA) tests for the validation layer (e.g. put in spreadsheet, normal use, degenerate use, edge cases, etc.).
      • It may be desirable to organize integration test packages by terminology (e.g. "com.wci....validation.umls" for UMLS checks, ".snomed" for SNOMED checks, etc.).
    • Implement the same check also for a SNOMED concept.
  • Properly implement terminology remover functionality (e.g. content service rest.removeTerminology).  The content and the metadata objects need to be removed in the right order so as to avoid foreign key constraint errors (e.g. attributes, definitions, semantic types, tree positions, transitive relationships, relationships, atoms, then atom classes - something like that).  For metadata, I think terminology/rootTerminology is the only dependency (remove root terminology first).  See the sketch after this list.
    • Then update the mojo test case to remove the terminology (currently commented out)
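
Below is a minimal sketch of the general concept query guard described above (the SELECT check, result-type check, and timeout hint), assuming a plain JPA EntityManager; the class and method names are illustrative only, not the project's API.

  import java.util.List;
  import javax.persistence.EntityManager;
  import javax.persistence.Query;

  public class ConceptQueryHelper {

    /** Runs an ad hoc HQL/SQL query with the safety checks noted above. */
    public static List<?> findByQuery(EntityManager manager, String queryStr,
        Class<?> expectedType, int timeoutMillis) throws Exception {

      // Must start with "SELECT " and must not contain ";"
      if (!queryStr.startsWith("SELECT ") || queryStr.contains(";")) {
        throw new Exception(
            "Query must start with \"SELECT \" and must not contain \";\"");
      }

      // Apply the JPA query timeout hint (milliseconds)
      Query query = manager.createQuery(queryStr);
      query.setHint("javax.persistence.query.timeout", timeoutMillis);

      // Check the object type of the first result before returning
      List<?> results = query.getResultList();
      if (!results.isEmpty() && !expectedType.isInstance(results.get(0))) {
        throw new Exception(
            "Query did not return the expected type: " + expectedType.getName());
      }
      return results;
    }
  }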
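
A sketch of NewConceptMinRequirementsCheck as described above; the minimal concept view used here is an assumption standing in for the project's real model class.

  import java.util.ArrayList;
  import java.util.Collection;
  import java.util.List;

  public class NewConceptMinRequirementsCheck {

    /** Minimal view of a concept for this sketch; the real model differs. */
    public interface ConceptView {
      Collection<?> getAtoms();
      Collection<?> getSemanticTypes();
    }

    /** Returns validation errors; an empty list means the concept passes. */
    public List<String> validate(ConceptView concept) {
      List<String> errors = new ArrayList<>();
      // Requires at least one atom
      if (concept.getAtoms() == null || concept.getAtoms().isEmpty()) {
        errors.add("Concept must have at least one atom");
      }
      // Requires at least one semantic type
      if (concept.getSemanticTypes() == null
          || concept.getSemanticTypes().isEmpty()) {
        errors.add("Concept must have at least one semantic type");
      }
      return errors;
    }
  }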
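
A sketch of the content-removal ordering for the terminology remover item above; the entity and field names are placeholders, and the real implementation would go through the content service rather than raw bulk deletes.

  import javax.persistence.EntityManager;

  public class TerminologyRemover {

    /** Removes content objects in dependency order to avoid FK errors. */
    public void removeTerminology(EntityManager manager, String terminology,
        String version) {
      // Leaf-level content first, then the classes it attaches to
      String[] entitiesInRemovalOrder = {
          "AttributeJpa", "DefinitionJpa", "SemanticTypeComponentJpa",
          "TreePositionJpa", "TransitiveRelationshipJpa", "RelationshipJpa",
          "AtomJpa", "ConceptJpa" // atom classes last
      };
      manager.getTransaction().begin();
      for (String entity : entitiesInRemovalOrder) {
        manager.createQuery("DELETE FROM " + entity
            + " e WHERE e.terminology = :t AND e.terminologyVersion = :v")
            .setParameter("t", terminology).setParameter("v", version)
            .executeUpdate();
      }
      manager.getTransaction().commit();
      // Metadata (terminology/rootTerminology) is handled after content,
      // per the note above.
    }
  }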

User Interface Enhancements

...

  • Need a "subset ancestors" computer (like reverse transitive closure) to idenitfy all of the ancestors of members of a subset that are not in the subset itself.  This is for a tree browser to be able to show you the path to your content.  Thus each "subset" will have two subsets, the subset itself (publishable) and the collection of ancestors (not publishable).  Then, in tree browser ,we can look up the subset memberships (if desired ,through graph resolver) of the level of the tree being browsed and then have a means to indicate that desired subset content exists in this subtree)
    • This works in conjunction with "tree browser" - there can be a picklist where you can choose a subset and then see that subset's data within the context of the tree (one subset at a time).  Or you can pick "no subset".  When enabled, this causes subset lookups to occur (can even really be a subsetquent call as delayed highlighting is fine)
  • "smart" RRF loader should support a config file to indicate what level at which definitions should be attached and should handle RXNORM and NCIt concepts.
  • RF2 snapshot loader for SNOMED CT (directly from RF2 - initial work done)
    • Need to create metadata
    • RelationshipType will be same as UMLS (PAR/CHD/RO)
    • AdditionalRelationshipType will be the typeId from relationships file
    • TermType will be the typeId from the descriptions file
    • NO STYs
    • AttributeNames - correspond to the field names from RF2 files
  • Owl loader (e.g. for NCIt) - will require use of "DL" features
    • Also have a corresponding Owl export feature (e.g. "release")
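
A sketch of the "subset ancestors" computation above, assuming the hierarchy is available as a child-to-parents map keyed by concept id; a real implementation would read relationships or the transitive closure table instead.

  import java.util.ArrayDeque;
  import java.util.Collections;
  import java.util.Deque;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  public class SubsetAncestorComputer {

    /** Returns ancestors of subset members that are not themselves members. */
    public Set<String> computeAncestors(Set<String> subsetMembers,
        Map<String, Set<String>> parentsOf) {
      Set<String> seen = new HashSet<>(subsetMembers);
      Deque<String> stack = new ArrayDeque<>(subsetMembers);
      while (!stack.isEmpty()) {
        String id = stack.pop();
        for (String parent : parentsOf.getOrDefault(id,
            Collections.<String>emptySet())) {
          // Visit each concept once, walking upward through all parents
          if (seen.add(parent)) {
            stack.push(parent);
          }
        }
      }
      // Keep only the ancestors, not the original members (not publishable)
      Set<String> ancestors = new HashSet<>(seen);
      ancestors.removeAll(subsetMembers);
      return ancestors;
    }
  }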

     

...

  • Project
    • Figure out how to capture "project scope" for SNOMED and for UMLS in a generalized way.  Update project objects to be able to properly capture (and compute) project scope.  NOTE: the scope definition may involve concepts/terminologies/semantic types.  In that event, the scope computer gets a little bit more complicated.
  • Test loading a DB with envers auditing disabled and then making changes in a DB while it is enabled. Does it properly create audit entries? (see the sketch after this list)
    • for the old edition of the component?
    • for the new edition?
  • Metathesaurus editing actions
    • MetathesaurusContentEditingRest (see the sketch after this list)
      • methods for each "edit action"
      • Create a RestImpl
      • Create a client
      • Create integration tests to run against a "stock" dev  database
    • Add a semantic type component, Remove a semantic type component
      • Have a pencil icon by the STYs section
      • clicking gives you a list of available STYs, in tree order, with a filter box for typing characters of the STY you want.
        • See the metadata "semantic type" option
      • User may want to choose multiple ones (so have a "finished" button)
      • Don't allow user to choose STYs already assigned to the concept.
      • Final action is to call "contentService.addSemanticTypeComponent"
      • Consider what happens to workflow status
      • Consider how to show "NEEDS_REVIEW" content in the browser
      • Consider how to support "Undo". - perhaps an action log (atomic/molecular) is a good idea still for that
    • Implement this completely including testing before moving on to other actions (each which requires a UI enhancement)
      • Approve a concept (e.g. set workflow status values).
      • Add an atom (e.g. MTH/PN - that needs to be generalized somehow)
      • Merge two concepts (consider the "workflow status" when this happens).
      • Move an atom (or atoms) from one concept to another
      • Split an atom (or atoms) out of a concept and specify a relationship type between the two concepts
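
A small sketch of how the Envers question above could be verified in a test, assuming the entity class is audited; only the Envers calls shown are standard API, the surrounding names are illustrative.

  import java.util.List;
  import javax.persistence.EntityManager;
  import org.hibernate.envers.AuditReader;
  import org.hibernate.envers.AuditReaderFactory;

  public class EnversAuditCheck {

    /** Returns the audit revision numbers recorded for one entity instance. */
    public static List<Number> getRevisions(EntityManager manager,
        Class<?> entityClass, Object id) {
      AuditReader reader = AuditReaderFactory.get(manager);
      // An empty list here means no audit entries exist for this object
      return reader.getRevisions(entityClass, id);
    }
  }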
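
A sketch of one "edit action" method shape for MetathesaurusContentEditingRest, assuming JAX-RS; the path, parameters, and the commented flow are assumptions for illustration and would need to match the project's actual services.

  import javax.ws.rs.POST;
  import javax.ws.rs.Path;
  import javax.ws.rs.PathParam;

  @Path("/edit")
  public class MetathesaurusContentEditingRestImpl {

    /** Adds a semantic type to a concept; other edit actions follow this shape. */
    @POST
    @Path("/sty/add/{conceptId}/{sty}")
    public void addSemanticTypeComponent(@PathParam("conceptId") Long conceptId,
        @PathParam("sty") String sty) {
      // Hypothetical flow: authorize the caller, load the concept, verify the
      // STY is not already assigned, call contentService.addSemanticTypeComponent,
      // set workflow status to NEEDS_REVIEW, and record the action in an
      // action log so it can later be undone.
    }
  }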

...

      • concept

Admin Tools

  • RRF Loader
    • Finish metadata loading
    • Do content loading
  • RRF Loader - single source

...