File sizes disagree when doing import

You cannot save query results in NVivo; however, you can export them to other applications (for example, Excel) to save them. See Export query results. NOTE: If you include an aggregate node in the scope of a query, content coded to it or its direct children is included in the results. An aggregate node gathers the content coded at its child nodes up to the parent node. Results for text coding and region coding are shown on different tabs in the Detail View. Each row shows the data for one node or case in one file. Results for a single node or case across all files in the query are not shown, nor is an overall value for all nodes or cases and files.

Percentage agreement is the percentage of file content (measured in characters, pixels, or tenths of seconds) on which the two users agree that it should, or should not, be coded to a specific node or case.
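Expressed as a formula (a standard formulation consistent with the description above, not a reproduction of NVivo's internal calculation):

\[
\text{percentage agreement} = \frac{\text{units coded by both users} + \text{units coded by neither user}}{\text{total units in the file}} \times 100
\]

where a unit is a character, pixel, or tenth of a second depending on the file type.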

For pictures or PDF regions, pixel ranges are used instead of characters, and for media files, tenths of seconds. The kappa formula calculates the agreement between the two coders and then adjusts it for the agreement that would be expected to occur by chance.
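In standard notation (Cohen's kappa for two coders; the symbols below are introduced here for illustration and are not NVivo terminology), this calculation can be written as

\[
\kappa = \frac{P_o - P_e}{1 - P_e},
\qquad
P_e = p_A\,p_B + (1 - p_A)(1 - p_B)
\]

where P_o is the observed proportion of units on which the two coders agree (the percentage agreement expressed as a proportion), p_A and p_B are the proportions of the file that coders A and B each coded to the node or case, and P_e is the agreement that would be expected by chance given those proportions.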

Suppose, for example, that both coders coded the same 40 excerpts and both left the same 30 excerpts uncoded, so they agree on 70 of the excerpts, while the remaining excerpts were coded by one coder but not the other. The kappa coefficient for this example lies between 0 and 1, indicating partial agreement; its exact value depends on how the disagreements are split between the two coders.
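As an illustration only (the total of 100 excerpts and the even split of the disagreements are assumptions, not figures from the example above), suppose there are 100 excerpts in total and each coder coded 15 excerpts that the other did not. Then

\[
P_o = \frac{40 + 30}{100} = 0.70,
\qquad
p_A = p_B = \frac{40 + 15}{100} = 0.55,
\]
\[
P_e = 0.55 \times 0.55 + 0.45 \times 0.45 = 0.505,
\qquad
\kappa = \frac{0.70 - 0.505}{1 - 0.505} \approx 0.39.
\]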

You can see further examples of kappa coefficient calculations by downloading the Coding Comparison Calculation Examples spreadsheet (Excel). If two users are in complete agreement about which content in a file should be coded, the kappa coefficient is 1. Values between 0 and 1 indicate partial agreement.

Different authors have suggested different guidelines for interpreting kappa values (see, for example, Xie in the references below). Kappa values can be low even when percentage agreement is high. For example, if two users each code different small sections of a file and leave most of its content uncoded, the percentage agreement is high, because there is high agreement on the content that should not be coded.

But this situation is likely to occur by chance (i.e. two users who each code little of a file will usually agree that most of it is uncoded), so the kappa value is low. Conversely, if most of a file is not coded but there is agreement on the content that is coded, then percentage agreement is again high, and now the kappa value is also high, because this situation is unlikely to occur by chance. If all the kappa values in a query are 0 or 1, it may indicate that one of the two users being compared has not coded any of the selected files to the selected nodes (i.e. there is no coding by that user for the query to compare). When merging projects with the intention of running coding comparisons, ensure that all documents and codes in the projects, including their coding structures, match properly.

When configuring the import, see Merge projects or import items from another project.

References

McDonald, N., Schoenebeck, S., & Forte, A. (2019). Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), Article 39.
McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276–282.
Lin, C. Genest, D. Banks, G. Molenberghs, & D. Wang (Eds.). The New Stack.
de Vries, H. (2008). Using pooled kappa to summarize interrater agreement across many items. Field Methods, 20(3).
Xie, Q.

NVivo calculates percentage agreement and kappa coefficients for each combination of node or case and file. Note that the units of measure used in this calculation depend on the source type: for documents the units are characters, while for audio and video sources the units are seconds of duration.
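To make the per-combination calculation concrete, here is a minimal sketch in Python of how percentage agreement and a kappa coefficient could be computed for one node in one document, assuming each user's coding is available as a list of character ranges. This is an illustration under those assumptions, not NVivo's actual implementation, and every name in it is hypothetical.

def coded_units(ranges):
    """Expand (start, end) character ranges into the set of coded character positions."""
    coded = set()
    for start, end in ranges:
        coded.update(range(start, end))
    return coded

def agreement_and_kappa(ranges_a, ranges_b, doc_length):
    """Return (percentage agreement, kappa) for one node in one document."""
    a = coded_units(ranges_a)
    b = coded_units(ranges_b)
    both = len(a & b)                        # characters coded by both users
    neither = doc_length - len(a | b)        # characters coded by neither user
    p_o = (both + neither) / doc_length      # observed agreement
    p_a, p_b = len(a) / doc_length, len(b) / doc_length
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    kappa = 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
    return p_o * 100, kappa

# Example: a 1000-character document and two users' coding for one node
print(agreement_and_kappa([(0, 100), (400, 450)], [(0, 80), (500, 600)], 1000))

The call above reports roughly 83% agreement and a kappa of about 0.38, because the two users overlap on only part of what they coded.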

Approximate guidelines for interpreting the value of Kappa have been suggested by several authors (see the references above). Because the Kappa coefficient calculation takes into account the likelihood of the agreement between users occurring by chance, the value of Kappa can be low even though the percentage agreement is high. For example, if most of a source has not been coded at the node by either user, but each user has coded completely different small sections of the source at the node, then the percentage agreement between the users will be high.

But since this situation would be highly likely to occur by chance (i.e. two users who each code only a small amount of a source will usually agree that most of it should not be coded), the Kappa coefficient is low. Conversely, if most of a source has not been coded at the node by either user, but each user has coded almost the same sections of the source at the node, then the percentage agreement between the users will again be high. In this case, however, the situation would be highly unlikely to occur by chance, so the Kappa coefficient is also high. These examples indicate why many researchers regard the Kappa coefficient as a more useful measure of inter-rater reliability than the percentage agreement figure.
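A numeric illustration of both cases (all figures here are invented for illustration): in a 1,000-character document, suppose user A codes characters 1–50 at the node and user B codes characters 51–100, so their coding never overlaps. Then

\[
P_o = \frac{900}{1000} = 0.90,\quad
p_A = p_B = 0.05,\quad
P_e = 0.05^2 + 0.95^2 = 0.905,\quad
\kappa = \frac{0.90 - 0.905}{1 - 0.905} \approx -0.05.
\]

If instead user B codes characters 1–45, almost the same section as user A, then

\[
P_o = \frac{45 + 950}{1000} = 0.995,\quad
p_A = 0.05,\ p_B = 0.045,\quad
P_e = 0.05 \times 0.045 + 0.95 \times 0.955 = 0.9095,\quad
\kappa = \frac{0.995 - 0.9095}{1 - 0.9095} \approx 0.94.
\]

Percentage agreement is high in both cases (90% and 99.5%), but Kappa separates them clearly, and is slightly negative in the first case.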

A Kappa coefficient less than or equal to zero indicates that there is no agreement between the two users, beyond what could be expected by chance, about which content in the source should be coded at the node.

This most often indicates that one of the two users being compared has not coded any of the selected sources at the selected nodes. When using the Import Project feature in NVivo to import coding, ensure that the imported sources and nodes are merged with their existing counterparts rather than duplicated, so that both users' coding is compared on the same items (see Merge projects or import items from another project).

NVivo calculates the Kappa coefficient and percentage agreement individually for each combination of node and source. However, the results of a Coding Comparison query can be exported from NVivo as a spreadsheet using the Export List command to allow you to perform further calculations. If you wish to calculate an average Kappa coefficient or percentage agreement for a single node across multiple sources, or for multiple sources and nodes, you will need to consider how you want to weight the different sources in the calculation.

For example, do you want to treat each source equally, or do you want to give more weight to large sources than small sources? For some examples of how average Kappa coefficients and percentage agreements can be calculated from Coding Comparison query results exported from NVivo, download the Coding Comparison Calculation Examples spreadsheet.

This spreadsheet includes four examples, with the average Kappa coefficients and percentage agreements calculated using spreadsheet formulas:

- Average figures for a single node across 3 sources, weighting each source equally.
- Average figures for a single node across 3 sources, weighting each source according to its size.
- Average figures for 5 nodes across 3 sources, weighting each source equally.
- Average figures for 5 nodes across 3 sources, weighting each source according to its size.

If your project has different types of sources (for example, documents and audio files), you may need to give further consideration to how you want to weight these different sources, since document size is measured in characters while audio size is measured in seconds of duration.
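As a sketch of the two weighting approaches (in Python rather than spreadsheet formulas, and not the formulas used in the example spreadsheet; the kappa values and source sizes below are invented):

def average_kappa(kappas, sizes=None):
    # Equal weighting if sizes is None; otherwise weight each source by its size.
    if sizes is None:
        return sum(kappas) / len(kappas)
    return sum(k * s for k, s in zip(kappas, sizes)) / sum(sizes)

kappas = [0.85, 0.40, 0.70]           # kappa for one node in each of 3 sources
sizes = [12000, 3000, 5000]           # source sizes, e.g. in characters
print(average_kappa(kappas))          # equal weighting: 0.65
print(average_kappa(kappas, sizes))   # size weighting: 14900 / 20000 = 0.745

The same pattern applies to averaging percentage agreement, and the size weights could equally be character counts, pixel counts, or durations, as discussed above.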
