Introduction
Whole Exome Sequencing (WES) has emerged as an efficient tool in clinical cancer diagnostics, broadening the scope from panel-based diagnostics to screening of all genes and enabling robust determination of complex biomarkers in a single analysis.
Methods
To assess concordance, six formalin-fixed paraffin-embedded (FFPE) tissue specimens and four commercial reference standards were analyzed by WES as matched tumor-normal DNA at 21 NGS centers in Germany, each employing its local wet-lab and bioinformatics workflow to investigate somatic and germline variants, copy-number alterations (CNA), and several complex biomarkers. Somatic variant calling was performed in 494 diagnostically relevant cancer genes. In addition, all raw data were re-analyzed with a central bioinformatic pipeline to separate wet-lab from dry-lab variability.
Results
The mean positive percentage agreement (PPA) of somatic variant calling was 76% and the positive predictive value (PPV) 89%, compared to a consensus list of variants found by at least five centers (a sketch of these metrics follows this abstract). Variant filtering was identified as the main cause of divergent variant calls. Adjusting filter criteria and re-analysis increased the PPA to 88% for all variants and 97% for clinically relevant variants. CNA calls were concordant for 82% of genomic regions. Calls of homologous recombination deficiency (HRD), tumor mutational burden (TMB), and microsatellite instability (MSI) status were concordant for 94%, 93%, and 93%, respectively. The variability of CNA calls and complex biomarkers was not considerably reduced by the central pipeline and was hence attributed to wet-lab differences.
Conclusion
Continuous optimization of bioinformatic workflows and participation in round-robin tests are recommended.
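As a minimal illustration of the agreement metrics reported in the Results section, the following Python sketch computes PPA and PPV for one center's somatic variant calls against a consensus list. The variant tuples and the helper ppa_ppv are hypothetical and are not part of the study's actual pipeline.

```python
# Illustrative sketch (not the study's pipeline): positive percentage agreement
# (PPA) and positive predictive value (PPV) of one center's variant calls
# against a consensus variant list. Variant identifiers are hypothetical.

def ppa_ppv(center_calls: set, consensus: set) -> tuple[float, float]:
    """PPA = TP / (TP + FN), PPV = TP / (TP + FP)."""
    tp = len(center_calls & consensus)   # consensus variants the center also called
    fn = len(consensus - center_calls)   # consensus variants the center missed
    fp = len(center_calls - consensus)   # extra calls not in the consensus
    ppa = tp / (tp + fn) if (tp + fn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return ppa, ppv

# Hypothetical toy data: variants as (chromosome, position, ref, alt) tuples.
consensus = {("chr17", 7674220, "C", "T"), ("chr13", 32340301, "A", "G"),
             ("chr7", 55191822, "T", "G")}
center = {("chr17", 7674220, "C", "T"), ("chr7", 55191822, "T", "G"),
          ("chr12", 25245350, "C", "A")}

ppa, ppv = ppa_ppv(center, consensus)
print(f"PPA = {ppa:.0%}, PPV = {ppv:.0%}")  # PPA = 67%, PPV = 67%
```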
Deep brain stimulation (DBS) is a highly effective, evidence-based therapy for a set of neurological and psychiatric conditions, especially movement disorders such as Parkinson’s disease, essential tremor, and dystonia. Recent developments have improved DBS technology. However, no unequivocal algorithms for optimized postoperative care exist so far. The aim of this review is to provide a synopsis of current clinical practice and to propose guidelines for the postoperative and rehabilitative care of patients who undergo DBS. A standardized work-up in the DBS centers, adapted to each patient’s clinical state and needs, is important, including a meticulous evaluation of clinical improvement and residual symptoms with a definition of goals for neurorehabilitation. Efficient and complete information transfer to subsequent caregivers is essential. Coordinated therapy within a multidisciplinary team (trained in movement disorders and DBS) is needed to achieve maximal long-term efficacy. An optimized postoperative framework might ultimately lead to more effective DBS outcomes.
Skyline evaluation techniques (also known as Pareto preference queries) follow a common paradigm that eliminates data elements by finding other elements in the data set that dominate them. A variety of sophisticated skyline evaluation techniques are already known, and skylines are therefore considered a well-researched area. Nevertheless, in this paper we come up with interesting new aspects. Our first contribution proposes so-called semi-skylines as a novel building block towards efficient algorithms. Semi-skylines can be computed very fast by a new Staircase algorithm. They have a number of interesting and diverse applications; for instance, they can be used to construct a very fast 2-dimensional skyline algorithm. We also show how they can be used effectively for the algebraic optimization of preference queries that mix hard constraints with soft preference conditions. Our second contribution concerns so-called skyline snippets, which represent some fraction of a full skyline. For very large skylines, in particular in higher dimensions, knowing only a snippet is often considered sufficient. We propose a novel approach for efficient skyline snippet computation without using any index structure, by employing our 2-dimensional skyline algorithm. All our efficiency claims are supported by a series of performance benchmarks. In summary, semi-skylines and skyline snippets can yield significant performance advantages over existing techniques.
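The semi-skyline and Staircase algorithms themselves are not reproduced here; as background for the dominance paradigm described above, the following Python sketch shows the classic sort-and-sweep idea for a 2-dimensional skyline, assuming lower values are better in both dimensions. The data set and the function skyline_2d are illustrative assumptions.

```python
# Illustrative sketch only: classic sort-and-sweep 2-d skyline computation,
# where lower values are better in both dimensions.

def skyline_2d(points):
    """Return the Pareto-optimal points: no other point is <= in both
    dimensions and strictly < in at least one."""
    pts = sorted(points)          # sort by dim 1, break ties by dim 2
    result = []
    best_y = float("inf")
    for x, y in pts:
        # A point survives the sweep iff its y improves on everything seen so far.
        if y < best_y:
            result.append((x, y))
            best_y = y
    return result

# Hypothetical example: (price, distance) pairs, both to be minimized.
hotels = [(50, 8), (60, 4), (45, 9), (70, 2), (55, 7), (65, 5)]
print(skyline_2d(hotels))
# [(45, 9), (50, 8), (55, 7), (60, 4), (70, 2)]  -- (65, 5) is dominated by (60, 4)
```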
Complex application domains like outdoor activity platforms demand a powerful search interface that can adapt to personal user preferences and to changing contexts such as weather conditions. Today most platforms offer a search technology known as Faceted Search, also called Parametric Search, in which a user iteratively adapts his or her search parameters in a tedious and time-consuming trial-and-error process until the quality and quantity of the query results roughly correspond to his or her expectations. This process becomes even more cumbersome in mobile environments. Here we present a sophisticated approach called Preference Search, which we have prototypically implemented in a commercial outdoor activity platform. Preference Search replaces lengthy user sessions with one single user request. Technically, this request is automatically compiled into a single Preference SQL query, which efficiently retrieves the items that best match the user's expectations within the current context. A benchmark was applied to both Faceted Search and Preference Search. Its evaluation indicates that Preference Search substantially improves the user's search satisfaction in comparison to Faceted Search.
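Neither the commercial platform nor the Preference SQL compilation is reproduced here; the following hypothetical Python sketch only contrasts the two paradigms: hard faceted filters that may return an empty result versus a single preference request evaluated as a Pareto-style best-match selection. All tour data, attribute names, and helper functions are assumptions made for illustration.

```python
# Conceptual sketch, not the product's implementation.
tours = [
    {"name": "Ridge Trail",  "length_km": 18, "difficulty": 3},
    {"name": "Lake Loop",    "length_km": 12, "difficulty": 1},
    {"name": "Summit Climb", "length_km": 15, "difficulty": 4},
    {"name": "Forest Walk",  "length_km": 14, "difficulty": 2},
]

# Faceted/parametric search: hard constraints may return nothing and force
# the user into another trial-and-error round.
faceted = [t for t in tours if 14 <= t["length_km"] <= 16 and t["difficulty"] <= 1]
print(faceted)  # [] -> empty answer, the user must relax filters manually

# Preference-style search: "length around 15 km" and "lowest difficulty" as
# soft goals; keep the tours not dominated by any other tour.
def score(t):
    return (abs(t["length_km"] - 15), t["difficulty"])  # both to be minimized

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

best = [t for t in tours
        if not any(dominates(score(u), score(t)) for u in tours)]
print([t["name"] for t in best])  # ['Lake Loop', 'Summit Climb', 'Forest Walk']
```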
Preference queries are becoming increasingly important in applications such as OLAP, data warehousing, and decision support systems. In these environments the Preference SQL GROUPING operation and aggregate functions are used extensively in formulating queries. In this report we present the full specification of the GROUPING operation in Preference SQL. This specification covers the grouping and aggregation known from standard SQL as well as grouping with substitutable-values (SV) semantics, which allows a more flexible and powerful grouping functionality than standard SQL. Furthermore, we introduce novel algebraic transformation laws for grouped preference queries and numerical ranking, which are among the most intuitive and practical types of queries. We explain how Preference SQL can be modified to integrate these optimization laws into the existing rule-based query optimizer. Our study on the well-known TPC-H benchmark dataset shows that significant performance gains can be achieved.
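The exact Preference SQL specification of SV grouping is not reproduced here; the following Python sketch only illustrates the general idea as described above, under the assumption that values declared as substitutable collapse into a single group before aggregation, in contrast to standard SQL GROUP BY. The sample rows, the sv_classes, and the grouped_sum helper are hypothetical.

```python
# Minimal sketch of grouping with substitutable values (SV): members of one
# SV class are treated as the same grouping key. Data are hypothetical.
from collections import defaultdict

orders = [("red", 3), ("crimson", 2), ("blue", 5), ("navy", 1), ("green", 4)]

# Substitutable-value classes: members of one class are "equally good".
sv_classes = [{"red", "crimson"}, {"blue", "navy"}]
canon = {v: min(cls) for cls in sv_classes for v in cls}  # canonical representative

def grouped_sum(rows, use_sv):
    totals = defaultdict(int)
    for colour, qty in rows:
        key = canon.get(colour, colour) if use_sv else colour
        totals[key] += qty
    return dict(totals)

print(grouped_sum(orders, use_sv=False))
# {'red': 3, 'crimson': 2, 'blue': 5, 'navy': 1, 'green': 4}  -- standard GROUP BY
print(grouped_sum(orders, use_sv=True))
# {'crimson': 5, 'blue': 6, 'green': 4}  -- substitutable values share one group
```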
Managing large and confusing sets of ever-increasing data is a well-known problem in Data Mining. Since compromises are becoming more and more common in use cases such as Recommender Systems and other preference-based applications, it is very useful to cluster sets of promising results in order to get an overview and present them properly. In this paper we present Pareto dominance as a very suitable and promising approach for clustering objects over better-than relationships. In order to meet a user's desires, one can tip the balance of the final results toward the more favored dimension if no unambiguous decision for allocating objects is possible.
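As one possible reading of clustering over better-than relationships, the following Python sketch layers objects into Pareto fronts: front 0 contains all objects not dominated by anything, front 1 those dominated only by front 0, and so on. It is an illustrative sketch under the assumption that lower values are better in every dimension, not the paper's exact method; the two-dimensional toy objects are hypothetical.

```python
# Illustrative sketch: clustering objects into Pareto dominance levels (fronts).

def dominates(a, b):
    """a dominates b if it is at least as good in every dimension and strictly
    better in at least one (here: lower is better everywhere)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(objects):
    remaining = list(objects)
    fronts = []
    while remaining:
        # Current front: objects not dominated by anything still remaining.
        front = [o for o in remaining
                 if not any(dominates(p, o) for p in remaining if p is not o)]
        fronts.append(front)
        remaining = [o for o in remaining if o not in front]
    return fronts

items = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 2), (4, 4)]
for level, front in enumerate(pareto_fronts(items)):
    print(level, front)
# 0 [(1, 5), (2, 3), (4, 1)]
# 1 [(3, 4), (5, 2)]
# 2 [(4, 4)]
```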