-
Additional thoughts from Sharat and Qian:

6. Given a disease/condition, which drugs are prescribed for it? And the same question in reverse. The reason for Question 1 is this: a very important factor for a drug researcher is the mechanism or pathway. If they can get the pathway (e.g., MyD88 signalling) between drug and disease from any ARS/ARA result, or from their own expertise, then they will want to query which other diseases that pathway (MyD88) affects (Question 12 in Sandrine's list, the next priority). Then they will want to know whether there are already existing (or in-trial) drugs for that other disease which may use the same pathway (MyD88), and see how effective they are. This can help with the ordering of that result.
7. Can the above answers be filtered by "recently approved" drugs? By drugs in clinical trials?
8. Given a disease, what are the groups/clusters of drugs and diseases?
-
Regarding Kara's original list:

For #2, using an observed-expected frequency ratio or p-values (as opposed to raw co-occurrence) could work well, but we may have to change our paradigm on how we return results. Currently, COHD's TRAPI endpoint only returns an edge when the pair of concepts has a significant association. But to label the unknowns, we may want to return info on all relevant edges, including the non-significant ones, so that we can distinguish edges "with sufficient power/sample size but no significant association" from edges "without sufficient power/sample size and no significant association". This may then require a different predicate (e.g., …).

For #4, I'm currently not sure how we can specifically say that we would like to increase the confidence of an edge. I think each edge currently just has a one-dimensional score, which I guess could fold in many possible factors (e.g., confidence, relevance, effect size, novelty). I've mentioned in the past that I think it would be very useful for Translator to be able to score these dimensions separately, but that's never gotten much traction.

Can someone please provide a concrete example for #5? I don't yet see how it is distinct from the combination of #1 and #2.

Regarding Sharat and Qian's list:

For #6, COHD doesn't have any direct measure of efficacy, but perhaps relative frequency could be used as a proxy; i.e., hopefully the relative frequency with which patients with disease X get prescribed drug Y correlates with the efficacy of drug Y. However, this would have to be filtered to only include drugs associated with the disease (or known to treat the disease); otherwise you'd just get the most generally common drugs ranked at the top.

For #8, OpenPredict has disease-disease and drug-drug similarity calculations, based on cosine similarities of embeddings, that could be useful here. COHD disease-disease associations would be more reflective of comorbidities.
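To make the known/unknown distinction above concrete, here is a minimal sketch of labeling a concept-pair edge by both significance and power. The function name, the thresholds, and the normal-approximation test are illustrative assumptions for discussion, not COHD's actual method:

```python
import math

def label_edge(n_total, n_a, n_b, n_ab, alpha=0.05, min_expected=5):
    """Classify a concept-pair edge as 'known-associated',
    'known-unassociated', or 'unknown' (underpowered).
    Illustrative sketch only, not COHD's actual method.
    n_total: patients in the dataset; n_a, n_b: patients with each
    concept; n_ab: patients with both."""
    # expected co-occurrence count under independence
    expected = n_a * n_b / n_total
    if expected < min_expected:
        # too little data to call it either way -> the "unknown" bucket
        return "unknown"
    # crude z-test of observed vs expected co-occurrence counts
    z = (n_ab - expected) / math.sqrt(expected)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    if p < alpha:
        return "known-associated"
    return "known-unassociated"
```

The key point of the sketch is the three-way return value: today's behavior amounts to returning only the first label and silently dropping the other two, which conflates "tested and not associated" with "never adequately tested".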
-
[FYI: I just renumbered the suggestions above so that we can more easily refer to them.]

Re 2: ICEES KG would be able to support this, and it sounds like COHD would too, with a (minor?) adjustment to the alpha value for significance. I don't think we'd need to change the predicate, as the edge attributes will allow users to draw their own conclusions.

Re 4: This was my attempt to capture your suggestion, Casey. Please feel free to revise the description. I think you may be correct about the scoring, but that would be something for O&O to consider, I think.

Re 5: I suggested this (actually, I think it was Greg's idea originally) as a simple approach for selecting associations that are found in the real world. For example, selecting edges on drug X and disease Y associations, but restricted to those edges identified by way of real-world evidence (i.e., contributed by clinical KPs).

Re 6: The clinical KPs can provide associations between diagnosed diseases and prescribed drugs, with caveats of course. However, things like side effects (very hard to define), efficacy (I think you mean effectiveness), and off-label use (very hard to identify) are probably a bit out of scope. That said, as you note, "novelty", or distinguishing the "known" from the "unknown", probably is within scope. Perhaps of interest, ICEES KG exposes data on adverse outcomes (e.g., ED visits for respiratory issues, liver transplant, ventilation), which may be helpful here, but again, there are caveats.

Re 7: This information would need to be provided by a non-clinical KP.

Re 8: Like COHD, ICEES KG can provide disease-disease and drug-drug associations. Unlike COHD, ICEES KG is cohort-based by design, so the associations would apply to a specific cohort. For example, the ICEES KG asthma instance exposes data on a cohort of roughly 160,000 patients with a diagnosis of asthma or a related common pulmonary disorder (identified by way of a complex algorithm that factors in diagnostic codes, relevant labs, etc.). As such, any disease-disease or drug-drug associations that are identified would apply to a specific cohort. This consideration may add specificity to the proposed query. Casey's suggestions are also worth considering.
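As a toy illustration of what a cohort-based disease-disease association looks like, here is a sketch that computes an odds ratio for two diagnoses within one fixed cohort. The record shape and the Haldane correction are assumptions for the example, not ICEES KG's actual pipeline:

```python
def cohort_odds_ratio(cohort, dx1, dx2):
    """Odds ratio for co-occurrence of two diagnoses within one cohort.
    `cohort` is a list of per-patient records like {"dx": {"asthma", "gerd"}}.
    Illustrative sketch only; ICEES KG's real algorithm is far more involved."""
    a = b = c = d = 0  # 2x2 contingency table cells
    for patient in cohort:
        has1 = dx1 in patient["dx"]
        has2 = dx2 in patient["dx"]
        if has1 and has2:
            a += 1  # both diagnoses
        elif has1:
            b += 1  # dx1 only
        elif has2:
            c += 1  # dx2 only
        else:
            d += 1  # neither
    # add 0.5 to each cell (Haldane correction) to avoid division by zero
    return ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
```

Because the denominator population is the cohort itself (e.g., the ~160,000 asthma patients), the resulting odds ratio is conditional on cohort membership, which is exactly the added specificity described above.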
-
Re (2): I think we need clarification from the ARAs on their expectations, and from the Biolink folks on the choice of predicate (for instance, the Biolink definition of …). I like the idea of an overlay operation instead of (or in addition to?) a lookup; I am wondering what others think. If we converge on (2) as an immediately actionable approach for incorporating clinical evidence to support O&O, then we should definitely engage the ARAs and the Biolink folks sooner rather than later as we move toward implementation before the February relay meeting.
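To clarify the overlay-vs-lookup distinction for discussion: a lookup returns new edges matching a query, while an overlay annotates edges an ARA already has. Here is a rough sketch over a simplified TRAPI-style knowledge graph; the function name, the `clinical_lookup` callback, and the attribute CURIE are hypothetical placeholders, not an actual TRAPI operation:

```python
def overlay_clinical_evidence(knowledge_graph, clinical_lookup):
    """Annotate existing edges with clinical evidence instead of adding
    new edges. `knowledge_graph` is a simplified TRAPI-style dict:
    {"edges": {edge_id: {"subject": ..., "object": ..., "attributes": [...]}}}.
    `clinical_lookup(subject, object)` returns a dict of clinical stats
    (e.g., a p-value) or None. All names here are hypothetical sketches."""
    for edge in knowledge_graph["edges"].values():
        stats = clinical_lookup(edge["subject"], edge["object"])
        if stats is not None:
            edge.setdefault("attributes", []).append({
                "attribute_type_id": "clinical_evidence",  # placeholder CURIE
                "value": stats,
            })
    return knowledge_graph
```

The design point is that no edges are created or removed: the clinical KP only enriches what is already in the message, which sidesteps some of the predicate-choice question for the lookup case.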
-
January 9, 2023 update from Kara by way of Slack: Eight votes in favor of moving forward with Approach 2, with one vote against. Based on the vote and related discussion, I think we have reached general consensus on moving forward with a manual pilot test of Approach 2 in the context of the MVP1 templated query "what drugs may treat disease X?" However, we will also continue our discussion of other approaches for incorporating clinical evidence into O&O. In terms of the disease we should focus on for the pilot test of Approach 2, I'd suggest the rare pulmonary diseases that we identified for MVP1 and that we are using to drive development of the TCDC CARA.
-
In terms of specific diseases, we have two thumbs-up votes and two no-strong-opinion votes for cystic fibrosis and asthma. While we certainly don't have sufficient votes to formalize a decision, I suggest that, in the interest of time, we move forward with these two diseases for testing. As such, I ran two UI queries at ui.test.transltr.io, which attaches scores to answers, in addition to evidence (publications).

Cystic fibrosis UI results: https://ui.test.transltr.io/results?q=81eed809-787d-43d6-a0ab-252a1151d3e3
Asthma UI results: https://ui.test.transltr.io/results?q=4de7052a-51e8-4967-ab52-9d76ee1bf2c3

How do we retrieve results in a format that is suitable for testing? If I plug the PK into the ARAX UI, I can see results for each ARA or KP that responded, but I cannot obtain a complete list of results, preferably with scores. If I pull the results directly from the ARS, I can retrieve a JSON file, but like the ARAX UI, this file contains the ARA- and KP-specific PKs, not a complete list of results with scores. Here is a link to the spreadsheet I would like to populate for testing. We will need to brainstorm regarding next steps.
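One possible way to get a flat, scored result list out of a downloaded TRAPI JSON file is sketched below. It assumes a TRAPI 1.3-style message in which each result carries `node_bindings` and an optional top-level `score`; field names differ across TRAPI versions and ARAs, so treat this as a starting point rather than a finished tool:

```python
def flatten_results(trapi_response, drug_qnode="n0"):
    """Flatten a TRAPI-style response into (drug CURIE, score) pairs,
    sorted highest score first. Assumes TRAPI 1.3-style results with
    `node_bindings` and an optional `score`; adjust field names for
    other TRAPI versions (e.g., 1.4 moves scores under `analyses`)."""
    rows = []
    for result in trapi_response.get("message", {}).get("results", []):
        bindings = result.get("node_bindings", {}).get(drug_qnode, [])
        for binding in bindings:
            rows.append((binding["id"], result.get("score")))
    # treat missing scores as 0 so unscored results sort last
    rows.sort(key=lambda r: r[1] if r[1] is not None else 0.0, reverse=True)
    return rows
```

The resulting pairs could be pasted directly into the testing spreadsheet, one row per candidate drug.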
-
This discussion thread is intended to promote discussion and poll TCDC members about compelling and useful approaches for incorporating clinical evidence into O&O. Please comment and/or add new suggestions. We'll take an informal poll after sufficient discussion has taken place.
Possibilities include the following:
1. One-hop clinical support for relative safety of candidate drugs
2. One-hop clinical support for inferred answers: threshold for the “known” and “unknown”
- From Sharat: this is essentially the same as Sharat/Qian's suggestion 6d.
3. One-hop clinical support for weighting literature-based scores: weighting clinical evidence vs research literature evidence
- From Sharat: this could really help with O&O, because confidence levels are so important for everything, and literature evidence does not provide them
4. One-hop clinical support for adding confidence to text-mined results and vice-versa
5. Clinical support for grouping answers: RWE
- From Sharat: yes certainly, a good control
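As a toy illustration of possibility 3 (weighting clinical evidence against literature evidence), here is a sketch of one way the two scores might be blended; the weight, field names, and fallback behavior are invented for the example, not an agreed-on Translator scheme:

```python
def combined_score(literature_score, clinical_score, w_clinical=0.6):
    """Blend a literature-derived score with a clinical-evidence score.
    If no clinical evidence exists (None), fall back to literature alone,
    which keeps possibility 2's 'unknown' edges rankable. The weight is an
    illustrative placeholder, not an agreed-on Translator scheme."""
    if clinical_score is None:
        return literature_score
    return w_clinical * clinical_score + (1 - w_clinical) * literature_score
```

Even a scheme this simple forces the key design decision raised above: whether confidence is a separate dimension or just another factor folded into one number.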