
Intraobserver Agreement


Results: Of the 100 patients observed, half were male and all were white; the median (SD) age was 69.7 (14.1) years. The intraobserver agreement was substantial to almost perfect for VF (total κ [95% CI], 0.59 [0.46-0.72] to 0.87 [0.79-0.96]) and similar for the OCT software (total κ [95% CI], 0.59 [0.46-0.71] to 0.85 [0.76-0.94]). The interobserver agreement among the 5 glaucoma specialists with the VF progression software was moderate (κ, 0.48; 95% CI, 0.41-0.55) and similar to that with the OCT progression software (κ, 0.52; 95% CI, 0.44-0.59). The interobserver agreement was substantial for images that were classified as showing no progression, but lower for images that were classified as questionable glaucoma progression or glaucoma progression. The interobserver agreement was fair for judgments of glaucoma progression (κ, 0.39; 95% CI, 0.32-0.48) and for consideration of treatment changes (κ, 0.39; 95% CI, 0.32-0.48). Factors associated with agreement were the stage of glaucoma and the difficulty of the case.

In SAS, the kappa statistics above can be obtained with PROC FREQ and the AGREE option:

proc freq data=kappa_test;
  tables observer1*observer2 / agree;
run;

The guidelines for studies of reliability and agreement [6] list 15 issues that need to be addressed in order to improve the quality of reporting. This can lead to separate publications on agreement and/or reliability outside of the main study, as Kottner et al. [6] put it: "Studies may be conducted with an emphasis on estimating reliability and agreement themselves, or they may be part of broader diagnostic accuracy studies, clinical trials or epidemiological investigations. In the latter case, researchers report agreement and reliability as quality control, either before the main study or using data from the main study."
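For readers working in R rather than SAS, the same kappa that PROC FREQ reports with the AGREE option can be reproduced from the observed and chance-expected agreement. The following is a minimal sketch with two hypothetical rater vectors; the names rater1 and rater2 and their values are illustrative, not taken from the study data.

rater1 <- c("prog", "prog", "no", "no", "prog", "no", "no", "prog", "no", "no")
rater2 <- c("prog", "no", "no", "no", "prog", "no", "prog", "prog", "no", "no")

tab <- table(rater1, rater2)                          # 2 x 2 contingency table of the ratings
po  <- sum(diag(tab)) / sum(tab)                      # observed proportion of agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
(po - pe) / (1 - pe)                                  # Cohen's kappa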

Typically, results are reported in just a few sentences, and there is usually little space for reporting. Nevertheless, it seems desirable to address all of the listed issues so that the data are as useful as possible. Therefore, reliability and agreement estimates should be reported either in a separate publication or as part of the main study.

Our study had limitations. First, the sample included patients with variability between tests, which led to more doubt when identifying progression or the lack thereof. However, many patients in clinical practice show variability between tests. In addition, the same examiner performed all OCT analyses, which excluded the examiner as a source of variability. Second, the Cirrus OCT scans were obtained without an eye tracker; an eye tracker has only recently been included in the Cirrus software. Third, the specialists did not receive instructions for evaluating the printouts; for example, some specialists could have put more emphasis on event analysis and others on trend analysis. However, we preferred not to give instructions so that the specialists could interpret glaucoma progression as in daily clinical practice. The follow-up period was short (up to 101 months).

However, the Cirrus is Carl Zeiss Meditec's latest-generation OCT device and has only been available for clinical use for 8 years. In addition, neither the number of scans nor the follow-up time was related to the interobserver agreement. Finally, only subjective agreement was assessed, not accuracy or precision; the objective of this study was to determine the agreement on progression with both software packages in a group of patients, in order to facilitate clinical decision-making on progression or treatment changes. Despite these limitations, our results may contribute to knowledge of the benefits and problems of both software packages in detecting glaucoma progression.

Kappa values, which reflect the degree of agreement, are listed in Table IV. The kappa score for all embryologists (0.734, 95% CI: 0.665-0.791) indicates good agreement. The level of experience as an embryologist, the degree of research experience and the number of days per week spent grading embryos did not have a significant influence on the kappa coefficient of agreement (Supplementary Table SI).
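Because the qualitative labels used here (fair, moderate, substantial, almost perfect) follow the widely cited Landis and Koch benchmarks, a small R helper can map a kappa value onto its band. The thresholds below are those benchmarks, the function name interpret_kappa is made up for illustration, and the "good" label attached to 0.734 above comes from a different benchmark scale (e.g., Altman's, where 0.61-0.80 is "good").

interpret_kappa <- function(k) {
  # Landis & Koch bands: <=0 poor, 0-0.20 slight, 0.21-0.40 fair,
  # 0.41-0.60 moderate, 0.61-0.80 substantial, 0.81-1.00 almost perfect
  cut(k,
      breaks = c(-1, 0, 0.20, 0.40, 0.60, 0.80, 1),
      labels = c("poor", "slight", "fair", "moderate", "substantial", "almost perfect"))
}

interpret_kappa(c(0.39, 0.48, 0.52, 0.734, 0.87))
# fair  moderate  moderate  substantial  almost perfect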

The main cause of weak agreement on questionable calls or progression could be the variability between different tests. There are many causes of variability, such as patient alertness, older age, test variability, glaucoma stage, test duration, and fixation and eye movements.19-22 Previous studies have shown that even experienced examiners have difficulty assessing progression in reliable VF series when the tests are variable.1 Consensus reports have shown that measurement variability affects the capacity of a device to detect change.23 The goal of this study was not to identify the causes of variability in the OCT scans or the Humphrey PFD printouts. There are 4 commonly used measures of variability: the range, the interquartile range, the variance and the standard deviation (see the short sketch at the end of this passage). The only data available for analyzing variability on the Humphrey progression software printouts were the 95% CIs of the PFD trend analysis, whereas the standard deviation of the trend analysis was available in the Cirrus OCT progression software. In the current 100 patients, we found that the 95% CI was larger than 1% of the mean PFD in 32 patients, and the standard deviation of the mean OCT RNFL thickness was above average in 31 patients. In patients with a large change in PFD or RNFL thickness and high variability between tests, the case is diagnosed as progression. However, in patients with a small change but high variability between tests, it is difficult to distinguish between variability and actual progression (Figure). A previous report24 showed that VF fluctuation is low in normal eyes and in eyes with early glaucoma, and increases from stage 0 to 4 of the glaucoma staging system. Similar results have been published suggesting that a thin RNFL in patients with advanced glaucoma was associated with greater variability in RNFL thickness measurements.25 These results suggest that variability between the different tests is the cause of the low agreement in the present study.

In R, the two observers' ratings were entered as:

observe1 <- c("R", "R", "F", "F", "F", "F", "F", "R", "R")
observe2 <- c("R", "R", "W", "F", "W", "W", "R", "W", "F", "R", "R")

One or more months later, the 100 images were randomly rearranged, assigned new ID numbers from 1 to 100, and uploaded to a new shared Dropbox folder.
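The four measures of spread named above are one-liners in base R; the sketch below applies them to a hypothetical series of mean RNFL thickness values (the vector rnfl and its numbers are made up for illustration, not patient data).

rnfl <- c(78.2, 76.5, 79.1, 74.8, 77.3, 75.9)  # hypothetical follow-up series, in micrometres
diff(range(rnfl))  # range: largest minus smallest value
IQR(rnfl)          # interquartile range: spread of the middle 50%
var(rnfl)          # variance
sd(rnfl)           # standard deviation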
