Using Common Sense Privacy Rating in Your School’s Vetting Process

As conversations swirl around application, service, and vendor vetting and AI data concerns (see Nick Marchese's recent ATLIS post on AI tools that use data prompts and user data to train models), I am curious about the criteria people use in their evaluations.

This has been an ongoing question for us at my school and one we continue to work on and explore – thanks, Erica Budd. As we look at how we will tackle this question, we have a few things to consider in our current process, and it is the last of these that I want to ask this community about.

  1. Has the application, service, or vendor been vetted against our existing vetting form? (included in the ATLIS 360 Companion Manual)
  2. Are there other applications, services, or vendors we already use that do similar, if not the same, things?
  3. If using “Sign in with Google,” is the app “Verified” by Google, and what services does it have access to? (more info on “verified third-party apps”)
  4. How do “Common Sense Privacy Ratings” score the application, service, or vendor? 

The Basic Question: How do you use the Common Sense Privacy Ratings in your vetting process, and what categories are most important to your process? Stop reading, skip the rest, and share your thoughts in the comments.

The Detailed Question: The Common Sense Privacy Ratings offer numerous levels of detail. How do you leverage them with faculty and staff within your vetting process, and how do you balance the base-level rating and score(s) against the individual category ratings? See the details below and share your thoughts in the comments.

Common Sense provides free ratings and evaluations for many commonly used tools. The ratings themselves are relatively simple: Pass, Warning, or Fail. A Pass requires all of the following:

  • No selling data
  • No third-party marketing
  • No targeted advertisements
  • No third-party tracking
  • No tracking across apps
  • No profiling for commercial purposes

A Warning means that “the product is unclear or says they engage in one or more worse privacy practices that are prohibited in the Pass rating…” and a Fail means that “the product does not have a privacy policy, or the privacy policy is not sufficient, because it does not disclose any details about the worse privacy practices prohibited in the Pass rating…”.
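To make that top-level logic concrete, here is a minimal sketch of how a school might encode the Pass/Warning/Fail decision for its own tracking spreadsheet or tooling. The criterion names, the three-valued answers (yes/no/unclear), and the function itself are my own illustration based on the descriptions above, not Common Sense's actual implementation.

```python
# Minimal sketch: deriving a Pass / Warning / Fail rating from the six
# Pass criteria above. The criterion names and the three-valued answers
# (True / False / None for "unclear") are illustrative assumptions.

PASS_CRITERIA = [
    "sells_data",
    "third_party_marketing",
    "targeted_ads",
    "third_party_tracking",
    "tracks_across_apps",
    "profiles_for_commercial_purposes",
]

def top_level_rating(policy_answers: dict[str, bool | None]) -> str:
    """Return 'Pass', 'Warning', or 'Fail' for a tool's privacy policy.

    policy_answers maps each criterion to True (the policy says it does
    this), False (the policy says it does not), or None (unclear).
    A missing policy is modeled as an empty dict.
    """
    if not policy_answers:
        return "Fail"  # no privacy policy at all
    answers = [policy_answers.get(c) for c in PASS_CRITERIA]
    if all(a is False for a in answers):
        return "Pass"     # policy explicitly disclaims all six practices
    if all(a is None for a in answers):
        return "Fail"     # policy discloses nothing about these practices
    return "Warning"      # unclear on, or engages in, one or more

# Example: a tool whose policy is silent on cross-app tracking.
print(top_level_rating({
    "sells_data": False,
    "third_party_marketing": False,
    "targeted_ads": False,
    "third_party_tracking": False,
    "tracks_across_apps": None,
    "profiles_for_commercial_purposes": False,
}))  # -> Warning
```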

That Pass/Warning/Fail rating is the simplest one Common Sense provides, and a faculty or staff member can use it to get a quick sense of whether a tool is likely to pass a full review. However, many of us will need greater depth before approving a tool.

They provide three layers of evaluation, each offering greater depth. The Quick and Essential evaluations can be used by faculty and staff in their initial look to provide additional context to the question of use. The Complete evaluation is, I believe, the best fit for the work Tech and Ed Tech members need to do in their process.

  1. Quick Evaluations – Based on 6 rating questions – Include Pass, Warning, or Fail ratings. No Overall Score or Evaluation Concerns.
  2. Essential Evaluations – Based on 30 rating questions – Include Pass, Warning, or Fail ratings. Include an Overall Score and Evaluation Concerns.
  3. Complete Evaluations – Based on 155 rating questions – Include Pass, Warning, or Fail ratings. Include an Overall Score and Evaluation Concerns.

The Essential and Complete evaluations look at “Evaluation Concerns,” which receive scores ranging from Best to Poor (Best (81-100) | Good (61-80) | Average (41-60) | Fair (21-40) | Poor (0-20)) and are evaluated within the categories listed below. This is where the fun begins.

Within each of these categories is a subset of measures used for evaluation, and each category is scored against the measures defined by Common Sense. For example, Kahoot is rated Pass with an overall score of 82% (Best); however, the scores within the ten Evaluation Concerns categories may tell a different story, as only three of the ten categories reach the Best range suggested by that overall 82% (see the sketch after the list below):

  • Data Collection – 70% – GOOD
  • Data Sharing – 70% – GOOD
  • Data Security – 45% – AVERAGE
  • Data Rights – 95% – BEST
  • Individual Control – 55% – AVERAGE
  • Data Sold – 40% – FAIR
  • Data Safety – 60% – AVERAGE
  • Ads & Tracking – 85% – BEST
  • Parental Consent – 75% – GOOD
  • School Purpose – 88% – BEST
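To see why the category view matters, here is a small sketch that maps each category score to its band using the score ranges above and flags anything at or below a threshold. The threshold value (60) and the flagging idea are school-side assumptions, not part of Common Sense's model.

```python
# Map each category score to its band using the ranges above, and flag
# categories at or below a school-chosen threshold for manual review.

BANDS = [(81, "Best"), (61, "Good"), (41, "Average"), (21, "Fair"), (0, "Poor")]

def band(score: int) -> str:
    """Return the Common Sense band label for a 0-100 score."""
    return next(label for floor, label in BANDS if score >= floor)

# Kahoot's category scores as listed above.
kahoot = {
    "Data Collection": 70, "Data Sharing": 70, "Data Security": 45,
    "Data Rights": 95, "Individual Control": 55, "Data Sold": 40,
    "Data Safety": 60, "Ads & Tracking": 85, "Parental Consent": 75,
    "School Purpose": 88,
}

THRESHOLD = 60  # a school-chosen floor, not a Common Sense value

for category, score in kahoot.items():
    flag = "  <-- review" if score <= THRESHOLD else ""
    print(f"{category:18} {score:3}% {band(score):7}{flag}")
```

Run against Kahoot's numbers, this flags Data Security (45%), Individual Control (55%), Data Sold (40%), and Data Safety (60%) for a closer look, even though the tool's overall rating is Pass with a Best-range score.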

Common Sense provides details about each of these ratings, which I am not outlining here, but as asked in my “Detailed Question”: how do you use these ratings in your vetting process, and which do you value more than others? Does a score of 40% for Data Sold and 45% for Data Security negate the overall score of 82%?
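One way to reason about that last question: the published overall score is Common Sense's own weighting, but a school can compute its own view that emphasizes the categories it cares about most. The weights below are purely illustrative assumptions, not a recommendation. Note, too, that even the unweighted average of the ten categories (about 68%) differs from the published overall score of 82%, so the overall score is clearly not a simple category average.

```python
# A school-weighted view of the same category scores. The overall 82% is
# Common Sense's own calculation; this sketch only shows how emphasizing
# Data Sold and Data Security (weights are illustrative assumptions)
# changes the picture relative to an unweighted average.

kahoot = {
    "Data Collection": 70, "Data Sharing": 70, "Data Security": 45,
    "Data Rights": 95, "Individual Control": 55, "Data Sold": 40,
    "Data Safety": 60, "Ads & Tracking": 85, "Parental Consent": 75,
    "School Purpose": 88,
}

# Hypothetical weights: triple the categories this school worries about most.
weights = {category: 1 for category in kahoot}
weights["Data Sold"] = 3
weights["Data Security"] = 3

weighted = sum(kahoot[c] * weights[c] for c in kahoot) / sum(weights.values())
unweighted = sum(kahoot.values()) / len(kahoot)

print(f"Unweighted average: {unweighted:.1f}%")  # ~68.3%
print(f"School-weighted:    {weighted:.1f}%")    # ~60.9%
```

Under these assumed weights, the same tool lands at roughly 61%, near the Average/Good boundary rather than in the Best range, which is exactly the tension behind the question.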

This is not easy…

As we try to empower faculty and staff to use tools that improve teaching and learning and build efficiencies in school operations, how can we more effectively vet these applications, services, and vendors to ensure that their privacy and data-use policies and practices do not put our schools, and those we serve, at risk both now and in the future?

NOTE: This has been cross-posted within the ATLIS Access Points Community

 

About William Stites

Currently the Director of Technology for Montclair Kimberley Academy, occasional consultant, serial volunteer for ATLIS, husband, and father to two crazy kids who make me smile every day.