Categories
Content Engineering

Metadata for Appreciation and Transparency

Who supports your work? If you work in a non-profit or a university, that’s an important question. These organizations depend on the generosity of others. They should want the world to know who is making what they do possible. Fortunately, new standards for metadata will make that happen.

Individuals and teams who work in the non-profit and academic sectors, who either do research or deliver projects, can use online metadata to raise their profiles. Metadata can help online audiences discover information about grants relating to advancing knowledge or helping others. The metadata can reveal who is making grants, who is getting them, and what the grants cover.

Grants Metadata

A new set of metadata terms is pending in the schema.org vocabulary relating to grants and funding. The terms can help individuals and organizations understand the funding associated with research and other kinds of goal-focused projects conducted by academics and non-profits. The funded item (property: fundedItem) could be anything. While it will often be research (a study or a book), it could also be the delivery of a service such as training, curriculum development, environmental or historical restoration, inoculations, or conferences and festivals. There is no restriction on what kind of project or activity can be indicated.

The schema.org vocabulary is the most commonly used metadata standard for online information, and is used in Google search results, among other online platforms. So the release of new metadata terms in schema.org can have big implications for how people discover and assess information online.

A quick peek at the code will show how it works. Even if you aren’t familiar with what metadata code looks like, it is easy to understand. This example, from the schema.org website, shows that Caroline B. Turner receives funding from the National Science Foundation (grant number 1448821). Congratulations, Dr. Turner! How cool is that?

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "Person",
    "name": "Turner, Caroline B.",
    "givenName": "Caroline B.",
    "familyName": "Turner",
    "funding": {
      "@type": "Grant",
      "identifier": "1448821",
      "funder": {
        "@type": "Organization",
        "name": "National Science Foundation",
        "identifier": "https://doi.org/10.13039/100000001"
      }
    }
  }
  </script>


The new metadata anticipates diverse scenarios. Funders can give grants to projects, organizations, or individuals. Grants can be monetary, or in-kind. These elements can be combined with other schema.org vocabulary properties to provide information about how much money went to different people and organizations, and what projects they went to.
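For instance, a monetary grant to an organization’s project might be described along the following lines. This is a sketch based on the pending proposal — the funder, project, identifier, and amount are hypothetical, and the terms could change before release:

```json
{
  "@context": "http://schema.org",
  "@type": "MonetaryGrant",
  "identifier": "2018-1234",
  "funder": {
    "@type": "Organization",
    "name": "Example Foundation"
  },
  "amount": {
    "@type": "MonetaryAmount",
    "currency": "USD",
    "value": 150000
  },
  "fundedItem": {
    "@type": "Project",
    "name": "Community Literacy Initiative"
  }
}
```

An in-kind grant would use the plain Grant type instead of MonetaryGrant, omitting the amount.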

Showing Appreciation

The first reason to let others know who supports you is to show appreciation. Organizations should want to use the metadata to give recognition to their funders and to encourage their continued support.

The grants metadata helps people discover what kinds of organizations fund your work. Having funding can bring prestige to an organization. Many organizations are proud to let others know that their work was sponsored by a highly competitive grant. That can bring credibility to their work. As long as the funding organization enjoys a good reputation for being impartial and supporting high quality research, noting the funding organization is a big benefit to both the funder and the grant receiver. Who would want to hide the fact that they received a grant from the MacArthur Foundation, after all?

Appreciation can be expressed for in-kind grants as well. An organization can indicate that a local restaurant is a conference sponsor supplying the coffee and food.

Providing Transparency

The second reason to let others know who supports your work is to provide transparency. For some non-profits, the funding sources are opaque. In this age of widespread distrust, some readers may speculate about the motivations of an organization if information about its finances is missing. The existence of dark money and anonymous donors fuels such distrust. A lack of transparency can spark speculations that might not be accurate. Such speculation can be reduced by disclosing the funder of any grants received.

While the funding source alone doesn’t indicate if the data is accurate, it can help others understand the provenance of the data. Corporations may have a self-interest in the results of research, and some foundations may have an explicit mission that could influence the kinds of research outcomes they are willing to sponsor. As foundations move away from unrestricted grants and toward impact investing, providing details about who sponsors your work can help others understand why you are doing specific kinds of projects.

Transparency about funding reduces uncertainty about conflicts of interest. There’s certainly nothing wrong with an organization funding research they hope will result in a certain conclusion. Pharmaceutical companies understandably hope that the new drugs they are developing will show promise in trials. They rely on third parties to provide an independent review of a topic. Showing the funding relationship is central to convincing readers that the review is truly independent. If a funding relationship is hidden rather than disclosed, readers will doubt the independence of the researcher and question the credibility of the results.

It’s common practice for researchers to acknowledge any potential conflict of interest, such as having received money from a source that has a vested interest in what is being reported. The principle of transparency applies not only to doctors reporting on medical research, but also to less formal research. Investment research often indicates if the writer has any ownership of the stocks he or she is discussing. And news outlets increasingly note when reporting on a company if that company directly or indirectly owns the outlet. When writing about Amazon, The Washington Post will note “Bezos also owns The Washington Post.”

If a writer’s judgment even appears to have been influenced by a financial relationship, the writer should disclose that relationship to readers. Transparency is an expectation of readers, even though publishers are uneven in their application of it.

Right now, funding transparency is hard for readers to come by. Better metadata could help.

Current Problems with Funding Transparency

Transparency matters for any issue that’s subject to debate or verification, or open to interpretation. One such issue I’m familiar with is antitrust — whether certain firms have too much (monopoly) market power. It’s an issue that has been gaining interest around the globe among people of different political persuasions, but it’s an issue where there is a range of views and cited evidence. Even if you are not interested in this specific issue, the example of content relating to antitrust illustrates why greater transparency through metadata can be helpful.

A couple of blocks from my home in the Washington DC area is an institution that’s deeply involved in the antitrust policy debate: the Antonin Scalia Law School at George Mason University (GMU), a state-funded university that I financially support as a taxpayer. GMU is perhaps best-known for the pro-market, anti-regulation views of its law and economics faculty. It is the academic home of New York Times columnist Tyler Cowen, and has produced a lot of research and position papers on issues such as copyright, data privacy, and antitrust. Last month GMU hosted public hearings for the US Federal Trade Commission (FTC) on the future of antitrust policy.

Earlier this year, GMU faced a transparency controversy. As a state-funded university, it was subject to a Freedom of Information Act (FOIA) request about funding grants it receives. The request revealed that the Charles Koch Foundation had provided an “estimated $50 million” in grants to George Mason University to support its law and economics programs, according to the New York Times. Normally, generosity of that scale would be acknowledged by naming a building after the donor. But in this case the scale of donations only came to light after the FOIA request. Some of this funding entailed conditions that could be seen as compromising the independence of the researchers using the funds.

The New York Times noted that the FOIA also revealed another huge gift to GMU: “executives of the Federalist Society, a conservative national organization of lawyers, served as agents for a $20 million gift from an anonymous donor.” What’s at issue is not whether political advocacy groups are entitled to provide grants, or whether or not the funded research is valid. What’s problematic is that research funding was not transparent.

Right now, it is difficult for citizens to “follow the money” when it comes to corporate-sponsored research on public policy issues such as the future of antitrust. Corporations are willing to provide funding for research that is sympathetic to their positions, but may not want to draw attention to their funding.

In the US, the EU, and elsewhere, elected officials and government regulators have discussed the possibility of bringing new antitrust investigations against Google. For many years, Google has funded research countering arguments that it should be subject to antitrust regulation. But Google has faced its own controversies about its funding transparency, according to a report from the Google Transparency Project, part of the Campaign for Accountability, which describes itself as “a 501(c)(3) non-profit, nonpartisan watchdog organization.” The report “Google Academics” asserts: “Eric Schmidt, then Google’s chief executive, cited a Google-funded author in written answers to Congress to back his contention that his company wasn’t a monopoly. He didn’t mention Google had paid for the paper.”

Google champions the use of metadata, especially the schema.org vocabulary. As Wikipedia notes, “Google’s mission statement is ‘to organize the world’s information and make it universally accessible and useful.’” I like Google for doing that, and hold them to a high standard for transparency precisely because their mission is making information accessible.

Google provides hundreds of research grants to academics and others. How easy is it to know whom Google funds? The Google Transparency Project tried to find out by using Google Scholar, Google’s online search engine for academic papers. There was no direct way for them to search by funding source.

Searching for grants information without the benefit of metadata is very difficult. Source: Google Transparency Project, “Google Academics” report

They needed to search for phrases such as “grateful to Google.” That’s far short of making information accessible and useful. The funded researchers could express their appreciation more effectively by using metadata to indicate grants funding.

Google Transparency Project produced another report on the antitrust policy hearings that the FTC sponsored at GMU last month. The report, entitled “FTC Tech Hearings Heavily Feature Google-funded Speakers,” concludes: “A third of speakers have financial ties to Google, either directly or through their employer. The FTC has not disclosed those ties to attendees.” Many of the speakers Google funded were current or former faculty of GMU, according to the report.

I leave it to the reader to decide if the characterizations of the Google Transparency Project are fair and accurate. Assessing their report requires looking at footnotes and checking original sources. How much easier it would be if all the relevant information were captured in metadata, instead of scattered around in text documents.

Right now it is difficult to use Google Scholar to find out what academic research was funded by any specific company or foundation. I can only hope that funders of research, Google included, will encourage those who receive their grants to reveal that sponsorship within the metadata relating to the research. And that recipients will add funding metadata to their online profiles.

The Future of Grants & Funding Metadata

How might the general public benefit from metadata on grants funding? Individuals may want to know what projects or people a funder supports. They may also want to see how funding sources for an organization have changed over time.

These questions could be answered by a service such as Google, Bing, or Wolfram Alpha. More skilled users could even design their own query of the metadata by using SPARQL (a query language for semantic metadata). No doubt many journalists, grants-receiving organizations, and academics will find this information valuable.
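To give a flavor of what such a query could look like, here is a sketch of a SPARQL query that lists everything a given funder has sponsored. It assumes the published metadata has been harvested into a triple store, and it uses the property names from the pending proposal, which could change before release:

```sparql
# Hypothetical query: find every grant made by a named funder,
# and what each grant funded.
PREFIX schema: <http://schema.org/>

SELECT ?grant ?fundedItem
WHERE {
  ?grant a schema:Grant ;
         schema:funder ?funder ;
         schema:fundedItem ?fundedItem .
  ?funder schema:name "National Science Foundation" .
}
```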

Imagine if researchers at taxpayer-supported institutions such as GMU were required to indicate their funding sources within metadata. Or if independent non-profits made it a condition of receiving funding that they indicate the source within metadata. Imagine if the public expected full transparency about funding sources as the norm, rather than as something optional to disclose.

How You can get Involved

If you make or receive grants, you can start using the pending Grants metadata now in anticipation of its formal release. Metadata allows an individual to write information once, and reuse it often. When metadata is used to indicate funding, organizations have less worry about forgetting to mention a relationship in a specific context. The information about the relationship is discoverable online.

Note that the specifics of the grants proposal could change when it is released, though I expect tweaks rather than drastic revisions. Some details of the proposal will most interest research scientists concerned with research productivity and impact metrics, which matter less to researchers working in public policy and other areas. While the grants proposal has been under discussion for several years now, momentum for final release is building, and it will hopefully be finalized before long. Many researchers plan to use the newly-released metadata terms for datasets, and want to include funder information as part of their dataset metadata. (Sharing research data is often a condition of research grants, so it makes sense to add funding sponsorship to the datasets.)

If you have suggestions or concerns about the proposal, you can contribute your feedback to the schema.org community GitHub issue (no. 383) for grants. Schema.org is a W3C community, and is open to contributions from anyone.

— Michael Andrews

Categories
Content Engineering

Auditing Metadata Serialized in JSON-LD

As websites publish more metadata, publishers need ways to audit what they’ve published. This post will look at a tool called jq that can be used to audit metadata.

Metadata code is invisible to audiences. It operates behind the scenes. To find out what metadata exists entails looking at the source code, squinting at a jumble of div tags, CSS, JavaScript, and other stuff. Glancing at the source code is not a very efficient way to see what metadata is included with the content. Publishers need easy ways for their web teams to find out what metadata they’ve published.

This discussion will focus on metadata that’s serialized in the JSON-LD format. One nice thing about JSON-LD is that it separates the metadata from other code, making it easier to locate. For those not familiar with JSON-LD, a brief introduction. JSON-LD is the latest format for encoding web metadata, especially the widely-used schema.org vocabulary. JSON-LD is still less pervasive than microdata and RDFa, which are described within HTML elements. But JSON-LD has quickly emerged as the preferred syntax for many websites. It is more developer-friendly than HTML syntaxes, and shares a common heritage with the widely-used JSON data format.

According to statistics, around 225,000 websites are using JSON-LD. That’s about 21% of the websites sampled, and nearly 30% of the English-language websites sampled. Some major sites using JSON-LD for metadata include Apple, Booking.com, Ebay, LinkedIn, and Yelp.

Why Audit Metadata?

I’ve previously touched on the value of auditing metadata in my book, Metadata Basics for Web Content. For this discussion, I want to highlight a few specific benefits.

For those who work with SEO, the value of knowing what metadata exists is obvious: it influences discovery through search. But content creators will also want to know the metadata profile of their content. It can yield important insights useful for editorial planning.

Metadata provides a useful summary of the key information within published content. Reviewing metadata can provide a quick synopsis of what the content is about. At the same time, if metadata is missing, that means that machines can’t find the key information that audiences will want to know when viewing the content.

Auditing can reveal:

  • what key information is included in the content
  • if any important properties are missing that should be included

Online publishers should routinely audit their own metadata. And they may decide they’d benefit by auditing their competitors’ metadata as well. Generally, the more detailed and complete the metadata is, the more likely a publisher will be successful with their content. So seeing how well one’s own metadata compares with one’s competitors’ can reveal important insights into how readily audiences can access information.

How to Audit JSON-LD metadata

Metadata is code, written for machines. So how can members of web teams, whether writers or SEO specialists, get a quick sense of what metadata they have currently? Since I have a mission to evangelize the benefits of metadata to all content stakeholders, including less technical ones, I’ve been looking for light-weight ways to help all kinds of people discover what metadata they have.

For metadata encoded in HTML tags, the simplest way to explore it is using XPath, a simple filter query that searches down the DOM tree to find the relevant part containing the metadata. XPath is not too hard to learn (at least for basic needs), and is available within common tools such as Google Sheets.
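As a small illustration of the idea, Python’s standard library supports a subset of XPath, enough to filter elements by attribute. The microdata fragment below is hypothetical, chosen only to show the query pattern:

```python
import xml.etree.ElementTree as ET

# A hypothetical microdata snippet marking up a person's name.
snippet = """
<div itemscope="" itemtype="http://schema.org/Person">
  <span itemprop="name">Caroline B. Turner</span>
</div>
"""

root = ET.fromstring(snippet)
# XPath: search descendants for the span carrying itemprop="name"
name = root.find(".//span[@itemprop='name']").text
print(name)  # Caroline B. Turner
```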

Unfortunately, XPath can’t be used for metadata in JSON-LD. But happily, there is an equivalent to XPath that can be used to query JSON-based metadata. It is called jq.

The first step to doing an audit is to extract the JSON-LD from the website you want to audit. It lives within the element <script type="application/ld+json"></script>. Even if you need to manually extract the JSON-LD, it is easy to find in the source code (use Ctrl-F and search for ld+json). Be aware that there may be more than one JSON-LD metadata statement available. For example, when looking at the source code of a webpage on Apple’s website, I noticed three JSON-LD script elements representing three different statements: one covering product information (Offer), one covering the company (Organization), and another covering the website structure (BreadcrumbList). Some automated tools have been known to stop harvesting JSON-LD statements after finding the first one, so make sure you get them all, especially the ones with information unique to the webpage.
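If you’d rather script the extraction than copy it by hand, Python’s standard-library HTML parser can collect every ld+json script element on a page. This is a minimal sketch; the page fragment and statement contents are hypothetical:

```python
from html.parser import HTMLParser
import json

class JsonLdExtractor(HTMLParser):
    """Collects the parsed contents of every
    <script type="application/ld+json"> element on a page."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.statements = []

    def handle_starttag(self, tag, attrs):
        # Only flag script elements declared as JSON-LD
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.statements.append(json.loads(data))

# A hypothetical page with two JSON-LD statements
page = """
<html><head>
<script type="application/ld+json">{"@type": "Organization", "name": "Example Co"}</script>
<script type="application/ld+json">{"@type": "BreadcrumbList"}</script>
</head></html>
"""
extractor = JsonLdExtractor()
extractor.feed(page)
print(len(extractor.statements))  # 2
```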

Once you have collected the JSON-LD statements, you can begin to audit them to see what information they contain. Much like a content audit, you can set up a spreadsheet to track metadata for specific URLs.

Exploring JSON-LD with jq

jq is a “command line” application, which can present a hurdle for non-developers. But an online version of it exists called jq Play that is easy to use.

Although jq was designed for filtering ordinary plain JSON, it can also be used for JSON-LD. Just paste your JSON-LD statement in jq Play, and add a filter.

Let’s look at some simple filters that can identify important information in JSON-LD statements.

The first filter can tell us what properties are mentioned in the metadata. We can find that out using the “keys” filter. Type keys and you will get a list of properties at the highest level of the tree. Some of these have an @ symbol, indicating they are structural properties (for example "@context", "@id", "@type"). Don’t worry about those for now. Others will resemble words and be more understandable, for example, “contactPoint”, “logo”, “name”, “sameAs”, and “url”. These keys, from Apple’s Organization statement, tell us the kinds of information Apple includes about itself on its website.

JSON-LD statements on Apple.com
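For comparison, the same top-level listing is easy to reproduce in Python once the JSON-LD is parsed. The abbreviated Organization statement below is hypothetical:

```python
import json

# A hypothetical, abbreviated Organization statement
statement = json.loads("""{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png"
}""")

# Equivalent of jq's `keys` filter: sorted top-level property names
print(sorted(statement.keys()))
# ['@context', '@type', 'logo', 'name', 'url']
```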

Let’s suppose we have JSON-LD for an event. An event has many different kinds of entities associated with it, such as a location, the event’s name, and the performer. It would be nice to know what entities are mentioned in the metadata. All kinds of entities use a common property: name. Filtering on the name property can let us know what entities are mentioned in the metadata.

Using jq, we can find the entities by using the filter ..|.name? which provides a list of names. When applied to a JSON-LD code sample from the schema.org website, we get the names associated with the Event: the name of the orchestra, the auditorium, the conductor, and the two symphonic works.

The filter was constructed using the pattern ..|.foo? (foo is a placeholder for whatever property you want to filter on). JSON-LD stores information in a tree that may be deeply nested: entities can refer to other entities. The pattern lets the filter move through the tree and keep looking for potential matches.

results from jq play when filtering by name
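Readers comfortable with a scripting language can reproduce this recursive-descent pattern outside of jq. Here is a minimal Python sketch that walks a parsed JSON-LD tree and collects every name, much as ..|.name? does; the event data is a hypothetical fragment, not the schema.org sample:

```python
import json

def find_values(node, key):
    """Recursively collect every value stored under `key`, at any
    depth of the tree -- analogous to jq's ..|.key? pattern."""
    found = []
    if isinstance(node, dict):
        if key in node:
            found.append(node[key])
        for value in node.values():
            found.extend(find_values(value, key))
    elif isinstance(node, list):
        for item in node:
            found.extend(find_values(item, key))
    return found

# A hypothetical, abbreviated MusicEvent statement
event = json.loads("""{
  "@type": "MusicEvent",
  "name": "Symphony No. 9",
  "location": {"@type": "MusicVenue", "name": "Example Auditorium"},
  "performer": {"@type": "Person", "name": "Example Conductor"}
}""")

print(find_values(event, "name"))
# ['Symphony No. 9', 'Example Auditorium', 'Example Conductor']
```

The same function applied with the key "@type" would list the entity types, paralleling the ..|."@type"? filter discussed below.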

Finally, let’s make use of the structural information encoded with the @ symbol. Because lots of different entities have names, we also want to know the type of entity something is. Is the “Chicago Symphony” the name of a symphonic work, or the name of an orchestra? In JSON-LD, the type of entity is indicated with the @type property. We can use jq to find what types of entities are included in the metadata. To do this, the filter would be ..|."@type"? . It follows the same ..|.foo? pattern, except that structural properties with an @ prefix need to be within quotes, because ordinary JSON doesn’t use the @ prefix and jq doesn’t recognize it unless it’s quoted.

When we use this filter for an Event, we learn that the statement covers the following types of entities:

  • “MusicEvent”
  • “MusicVenue”
  • “Offer”
  • “MusicGroup”
  • “Person”
  • “CreativeWork”

That one simple query reveals a lot about what is included. We can confirm that the star of the show (type Person) is included in the metadata. If not, we know to add the name of the conductor.

Explore Further

I’m unable here to go into the details of how JSON-LD and schema.org metadata statements are constructed — though I do cover these basics in my book. To use jq in an audit, you will need some basic knowledge of important schema.org entities and properties, and know how JSON-LD creates objects (the curly braces) and lists (the brackets). If you don’t know these things yet, they can be learned easily.

The patterns in jq can be sophisticated, but at times, they can be fussy to wrangle. JSON-LD statements are frequently richer and more complex than simple statements in plain JSON. If you want to extract some specific information within JSON-LD, don’t hesitate to ask a friendly developer to help you set up a filter. Once you have the pattern, you can reuse it to retrieve similar information.

JSON-LD is still fairly new. Hopefully, purpose-built tools will emerge to help with auditing JSON-LD metadata. Until then, jq provides a lightweight option for exploring JSON-LD statements.

— Michael Andrews