Last month I devoted this column to the idea of content curation, why it is important, and what the consequences of ignoring it might be. This month, let’s focus on some approaches to getting content curation done. Here are seven approaches to consider.

1. Content culling

Once or twice a year, some organizations put a team of users and managers together to cull, or weed, content out of the system, usually out of a sense that the knowledge base has become too unwieldy and chaotic. Their goal is to examine every knowledge asset and decide what to do with it: eliminate it, keep it, or revise it. If you are going to do some spring cleaning of your content, it would be prudent to develop a set of criteria for judging each asset, similar to the 10 problem areas I presented last month.
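
One way to keep such criteria consistent across reviewers is to write them down as an explicit rubric. Here is a minimal sketch in Python; the specific checks and thresholds are my own illustrative assumptions, not the 10 problem areas from last month's column.

```python
# A minimal sketch of a culling rubric; the checks and thresholds are
# illustrative assumptions only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    title: str
    last_reviewed: date
    views_last_year: int
    has_owner: bool

def cull_decision(asset: Asset, today: date) -> str:
    """Return 'eliminate', 'revise', or 'keep' for one asset."""
    age_days = (today - asset.last_reviewed).days
    if asset.views_last_year == 0 and age_days > 730:
        return "eliminate"   # unused and unreviewed for two years
    if age_days > 365 or not asset.has_owner:
        return "revise"      # overdue for review, or orphaned
    return "keep"

# Example: an orphaned, unused asset last reviewed in early 2021 gets flagged.
print(cull_decision(Asset("Old FAQ", date(2021, 1, 15), 0, False), date.today()))
```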

The risks of this approach are that examining every asset could take a long time, and it is not certain that reviewers will give each piece of content in the repository the care it deserves. Because a small team is looking at the content, unintended bias may also creep into the process. But the biggest risk of all is the tendency to keep an asset in the system if there is any likelihood—however small—that “someone, someday” might use it. If the team cannot stick to some very tough criteria, this approach may not work. Bottom line: Culling, especially if it’s the sole approach, is likely not the best way to curate your content.

2. Experts

Some organizations have a dedicated staff of content curators who “own” specific segments of the knowledge base and manage them, regularly cutting, revising, and adding new content. These content owners are—hopefully—subject matter experts who take full responsibility for the accuracy and usefulness of the content. They are well known to the organization and have significant authority over what content should, and should not, be included in the knowledge base.

Expert curators push high-value content to users. Using email blasts, newsletters, blogs, wikis, and social media approaches, they try to point out new and interesting content to different segments of the organization. In other words, they are advocates for their content.

The organization places a great deal of trust in its expert curators. This is a good thing, because they presumably have the knowledge and capability to maintain the content at the highest level of quality. The risk of this approach is that the curation process is often top-down: once the curator makes a decision, there may be little pushback from users. This is why a feedback system is critical here.

3. Crowdsourcing

There are many examples, Wikipedia being the most recognized, where groups of people with similar interests create and curate the content. Using wikis and other groupware, they all critique, contribute, edit, and update the content. Ideally, most active group members have both passion and expertise in the topic and the good sense to maintain accuracy and distinguish valuable from trivial content (see approach No. 2, above).

The obvious benefit of this approach is the so-called wisdom of crowds, where the knowledge of many dedicated people comes together to improve the content in ways that no single individual, however smart, could match. The risk, especially in open environments, is that rogue contributors may add false, misleading, or unsubstantiated content. A high level of diligence among the devoted experts in the group can mitigate this.

4. Algorithms

Google, for example, doesn’t cull content, nor does it employ legions of curators to manage it. Instead, it uses a number of highly sophisticated and proprietary algorithms that rank search results based on key criteria, such as the relationships between knowledge assets (how they are linked), the popularity of those assets, the number of times a search term appears in an asset, and so on.
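
To see how this kind of ranking works in principle, here is a toy scoring function in Python. It is only a sketch: the signals are the ones mentioned above, and the weights are arbitrary illustrative assumptions, nothing like the hundreds of factors a real search engine weighs.

```python
# A toy ranking function, not Google's algorithm: it combines term frequency,
# link count, and popularity with weights chosen purely for illustration.
from dataclasses import dataclass

@dataclass
class KnowledgeAsset:
    text: str
    inbound_links: int   # how many other assets link to this one
    views: int           # a rough popularity signal

def score(asset: KnowledgeAsset, query: str) -> float:
    term_hits = asset.text.lower().count(query.lower())
    return 3.0 * term_hits + 2.0 * asset.inbound_links + 0.01 * asset.views

def search(assets: list[KnowledgeAsset], query: str) -> list[KnowledgeAsset]:
    # Score every asset and return the collection best-first.
    return sorted(assets, key=lambda a: score(a, query), reverse=True)
```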

Search for almost anything on Google or other search engines and you’ll likely come up with hundreds, thousands, or even millions of hits. But how do you know which assets are right for your query? Most of us scan the first page or two of results and usually find what we are looking for. These are the algorithms at work, trying to move the best content to the top of the list.

The risks, of course, are that no algorithm is perfect; it might miss the specific asset you are looking for, and one or more outdated or inappropriate assets may slip through. But when you are dealing with literally billions of knowledge assets, these risks are acceptable, and the algorithms are continuously refined to reduce them further.

Of course, the quality of a search result depends on the quality of the initial query. As you get better and better at defining your specific search parameters and language, your results will also get better.

One of the simpler algorithmic approaches used in many organizations is matching users to the assets they need to do their jobs. Matching metadata associated with users to metadata associated with knowledge assets filters the content so that (hopefully) the search results contain only those assets relevant to the task at hand, and to the user performing that task. If asset owners are serious about assigning the right metadata at the time of publication, and if they continually monitor how that content is used, this approach tends to help people find only the best, most relevant content at the moment of need.
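
A rough sketch of this kind of matching, again in Python, might look like the following. The specific metadata fields (role and products) are hypothetical; a real system would draw them from its HR and content management systems.

```python
# A minimal sketch of user-to-asset metadata matching; the fields are
# hypothetical, not a prescribed metadata scheme.
from dataclasses import dataclass

@dataclass
class UserProfile:
    role: str
    products: set[str]

@dataclass
class TaggedAsset:
    title: str
    roles: set[str]      # roles the asset was published for
    products: set[str]   # products it covers

def relevant_assets(user: UserProfile, assets: list[TaggedAsset]) -> list[TaggedAsset]:
    """Keep only assets whose metadata overlaps the user's role and products."""
    return [a for a in assets
            if user.role in a.roles and user.products & a.products]
```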

5. Analytics and social media

Here, instead of professional curators, users do the job of content curation by “voting” on the quality of the content through usage and feedback. Analytics play a key role, as it is important to capture the data users generate. Users (who could number in the thousands) influence the fate of a knowledge asset in a number of ways:

  1. The system can count the number of times people access and use an asset, although this is not the best measure of value.
  2. Users can recommend or review the value of a knowledge asset. The highest-rated assets flow to the top.
  3. Users can score a knowledge asset using a rating scale. In addition, through social media, it is possible to analyze the number of likes, retweets, shares, etc., associated with an asset.
  4. The system can collect data on how users employ an asset, how often they share it, and how many times they cite it.

Content managers can look at a variety of user data to assess content value and usage, and then decide whether an asset should be retired, revised, or left as is. They can also decide to promote content that appears to be valuable but is relatively unknown, especially to those who need it. The risk in this approach is that most of the data comes from users, and, while that can be quite useful, there are times when users may not be the best judges of the content’s worth. It is often more effective to combine this approach with professional curator oversight. As a simple illustration, the decision logic might be sketched as follows; the signals mirror the list above, and the thresholds are purely illustrative assumptions, not recommendations.
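
```python
# A simple sketch of turning user-generated signals into a curation decision.
# The thresholds are illustrative assumptions, not recommendations.
from dataclasses import dataclass

@dataclass
class UsageSignals:
    views: int
    avg_rating: float   # e.g., on a 1-5 scale
    shares: int

def curation_decision(s: UsageSignals) -> str:
    if s.views < 10 and s.shares == 0:
        return "retire (or promote, if still relevant but unknown)"
    if s.avg_rating < 2.5:
        return "revise"
    if s.avg_rating >= 4.0 and s.views < 100:
        return "promote"   # valuable but relatively unknown
    return "keep as is"
```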

The analytics approach has proved very helpful in areas such as online shopping (Amazon) and travel (TripAdvisor).

6. Syndicated content

A very different way for an organization to address content curation is to pass it off to an outside service. Online news aggregators, such as Flipboard, Apple News, and Yahoo, create little, if any, of their own content. Instead, they rely on outside news providers in many different fields to supply curated content for their sites. In effect, they let external content owners curate the content for them.

This is a growing trend within business and other organizations, especially in areas that are not the core focus of the business. Syndicated content providers have emerged in many content areas, including HR, IT, management and leadership, marketing, health and safety, and legal, to name a few. In fact, many of these services further curate the content by industry, so you might have HR for retail establishments, or health and safety for food handlers. This leaves internal content experts to focus on the unique intellectual property of the organization.

The risk here is that you are turning over content curation to a third party. This requires careful monitoring to ensure that the content meets the needs of the users, is available when and where they need it, is refreshed regularly, and does not conflict with any internal practice or policy.

7. Us

Finally, after all is said and done, it comes down to us—content users and creators. We are the guardians, the final quality-assurance checkpoint of what we use and create. Whether we do this a lot or just occasionally, it’s vitally important that we have the skills and awareness to discern good content from bad, and that we know how to publish content that meets high curation standards. A role for training?

So, which is best?

There is no single best approach. As I noted, there are a number of ways to look at the content curation job, and each organization must chart its own path based on budget, time frames, the size of the content repository, the skill sets of available staff, and the relative importance it attaches to this type of work. Most likely, a combination of approaches will work best. Some are simple, others are more complex, and many work best with solid technological support. But all of them require diligence and sound professional judgment as key ingredients. It’s what we must bring to the table.

Next month, I’ll look at one of the more distinctive challenges of content curation: expiration.