When Platform Coops are Seen, What Goes Unseen?

If you’re involved with the “cooperative community” on social media, you’ve probably heard a lot about platform cooperatives in recent years. The vision is simple: what if Uber or Airbnb were owned by their users, who could share decision-making responsibility and profits among themselves? Instead of being exploited by platforms, users could and should be running them. Just like cooperative supermarkets, these “platform co-ops” could market themselves as democratic alternatives to the venture-backed “Death Star” platforms coming out of Silicon Valley.

While I certainly agree we need to see new organizational forms take on the dominant venture-backed startup model, platform cooperatives have yet to prove that they’re up to the task. In fact, there are so few financially sustainable platform cooperatives in existence that, when Shareable magazine tried to list them in its article “11 Platform Cooperatives Creating a Real Sharing Economy,” it had to include businesses that don’t sell any products or services yet, businesses that aren’t cooperatives, and businesses that aren’t platforms. Some people complained about the exaggerated tone of the article in the comments section, so Shareable added a disclaimer at the bottom of the story.

The fact remains that, despite two years and two high-profile conferences in support of the concept, you can count the number of genuinely successful platform cooperatives on one hand. And it’s not as if this is a radically new concept that people have to wrap their heads around. Cooperatives are a very popular and proven business structure.

Despite platform cooperativism’s modest gains, I do see the concept’s value. Its existence pressures successful online platforms to share some of their profits with their users, and it invites entrepreneurs who want to create new platforms to try out a new organizational structure. I worry, however, that the cooperative community has only a limited amount of cognitive capacity to devote to information technology innovation, and the fantasy of platform cooperativism is taking up space that could be better used to promote and apply open source, open data, and peer-production principles to overcome some of the cooperative movement’s most pressing challenges. Instead of spilling lots of ink dreaming about how technology companies could be cooperatives, the “cooperative community” should be asking how cooperatives can benefit from technology development models with a proven track record of success.

The two models I wish were being more widely discussed in the cooperative community are open source technology and open data practices.

Open source software and the peer-production process it has spawned have been wildly successful at challenging conventional software business models. In 2001, Steve Ballmer of Microsoft called Linux, the world’s most widely used open source software project, “a cancer.” A decade later, Microsoft was among the top five corporations contributing to Linux. Google’s core operating systems, ChromeOS and Android, both run on Linux, and so do emerging competitors, many out of Asia, that are leveraging Android’s open source core to compete directly with Google in the smartphone market. That is just one of a myriad of open source success stories that include WordPress, Firefox, Wikipedia, and many more.

Corporations are adopting open source and other peer-production practices, such as open data, open knowledge, and open hardware, like wildfire—not because they want to share, but because they want to make money. Meanwhile, cooperatives are expected to follow a set of principles, one of which is “cooperation among cooperatives,” and yet adoption of open source and open data within the cooperative community is minimal. Evidence that the cooperative community is not adopting open approaches or following the sixth principle includes:

  • Research reports from cooperative support organizations often carry restrictive copyrights instead of open, permissive Creative Commons licenses.
  • Research data is locked away in PDFs instead of being made available in open data portals.
  • Information about cooperative networks and membership organizations is often organized in proprietary data models instead of open ones, and not made openly available in bulk using open data formats.
  • Cooperatives are often structured hierarchically like banks instead of horizontally like open source projects.
  • There still isn’t a searchable online directory of cooperatives in the United States, much less an open data compliant one.

All of the above problems could be resolved if the cooperative movement followed best practices emerging from the unfashionable but very useful open source, open data, free culture, open access, and peer-to-peer movements. These practices have proven track records for enabling highly productive, widespread collaborations among many different types of stakeholder groups. One thing these projects very rarely do, however, is organize themselves as cooperatives. Instead, open source projects tend to use for-profit, nonprofit, and unincorporated entities.

We tend to view platform cooperativism as a vision that has yet to be realized, but it could just as easily be viewed as a potential future that never came. Cooperative organizational structures are not new. They have impacted a myriad of giant industries including food and agriculture, electricity and real estate. So why haven’t cooperatives been successful at software development? The answer to this question could be a key to moving platform cooperativism forward.

Introducing Data Models for Human(itarian) Services

This was originally posted at Sarapis

Immediately after a disaster, information managers collect information about who is doing what, where, and turn it into “3W Reports.” While some groups have custom software for collecting this information, the most popular software tool for this work is the spreadsheet. Indeed, the spreadsheet is still the “lingua franca” of the humanitarian aid community, which is why UNOCHA’s Humanitarian Data Exchange project is designed to support people using this popular software tool.
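
For readers who haven’t seen one, a 3W sheet is typically just a header row and one row per activity; the Humanitarian Data Exchange encourages adding a row of HXL hashtags beneath the headers so that tools can interpret the columns. Here is a minimal sketch; the column names, hashtags, and rows are illustrative rather than any official template:

```python
import csv

# A minimal "3W" (who / what / where) sheet: human-readable headers, an HXL
# hashtag row beneath them, then one row per activity. Everything here is
# illustrative, not an official 3W or HXL template.
rows = [
    ["Organization", "Activity", "Region"],          # headers
    ["#org", "#activity", "#adm1"],                  # assumed HXL hashtags
    ["Example Relief Org", "Water distribution", "Region A"],
    ["Another NGO", "Temporary shelter", "Region B"],
]

with open("3w_report.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```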

After those critical first few days, nonprofits and government agencies often transition their efforts from ad hoc emergency relief and begin to provide more consistent “services” to the affected population.

The challenge of organizing this type of “humanitarian/human services” information is a bit different from the challenges associated with disaster-related 3W reports, and closer to the work done by people who manage and maintain persistent nonprofit service directories. In the US, these providers are often called “211” because, in many communities, you can dial 211 to be connected to a call center with access to a directory of local nonprofit service information.

During the ongoing migrant crisis facing Europe, a number of volunteer technical communities (VTCs) in the Digital Humanitarian Network engaged in the work of managing data about these humanitarian services. They quickly realized they needed a shared template for this information, both so they could more easily merge data with their peers and so that during the next disaster they wouldn’t have to reinvent the wheel all over again.

Since spreadsheets are the most popular information management tool, the group decided to focus on creating a standard set of column headers for spreadsheets that met a few shared criteria.

To create this shared data model, we analyzed a number of existing service data models, including:

  • Stand By Task Force’s services spreadsheet
  • Advisor.UNHCR services directory
  • Open Referral Human Service Data Standard (HSDS)

The first two data models came from the humanitarian sector and were relatively simple and easy to analyze. The third, Open Referral, comes from a US-based nonprofit service directory project that did not assume that spreadsheets would be an important medium for sharing and viewing data.

To effectively incorporate Open Referral into our analysis, we had to convert it into something that could be viewed in a single sheet of a spreadsheet (we call it “flat”). During the process we also made it compliant with the Humanitarian Exchange Language (HXL), which will enable Open Referral to collaborate more with the international humanitarian aid community on data standards work. Check out the Open Referral HSDS_flat sheet to see the work product.
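
As a rough illustration of what “flattening” involves, the sketch below joins two of HSDS’s relational tables into a single sheet and adds an HXL hashtag row under the headers. The file names, column names, and hashtags are assumptions for illustration; the HSDS_flat sheet mentioned above is the actual work product.

```python
import pandas as pd

# Sketch of flattening a relational HSDS export (one CSV per table) into a
# single sheet with an HXL hashtag row beneath the headers. File names,
# columns, and hashtags are assumed for illustration only.
services = pd.read_csv("service.csv")            # assumed columns: id, organization_id, name, description
organizations = pd.read_csv("organization.csv")  # assumed columns: id, name

flat = services.merge(
    organizations, left_on="organization_id", right_on="id",
    suffixes=("_service", "_org"),
)[["name_org", "name_service", "description"]]
flat.columns = ["Organization", "Service", "Description"]

# The HXL hashtag row sits directly beneath the human-readable headers.
hxl_row = pd.DataFrame([["#org", "#service", "#description"]], columns=flat.columns)
pd.concat([hxl_row, flat]).to_csv("hsds_flat.csv", index=False)
```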

We’re excited about the possibility that Open Referral will take this “flat” version under their wing and maintain it going forward.

Once we had a flat version of Open Referral, we could do some basic analysis of the three models to create a shared data model. You can learn about our process in our post “10 Steps to Create a Shared Data Model with Spreadsheets.”

The result of that work is what we’re calling the Humanitarian Service Data Model (HSDM). The following documents and resources (hopefully) make it useful to you and your organizations.

We hope the HSDM will be used by the various stakeholders who were involved in the process of making it, as well as other groups that routinely manage this type of data, such as:

  • member organizations of the Digital Humanitarian Network
  • grassroots groups that come together to collate information after disasters
  • big institutions like UNOCHA that maintain service datasets
  • software developers who make apps to organize and display service information

I hope that the community that came together to create the HSDM will continue to work together to create a taxonomy for #service+type (what the service does) and #service+eligibility (who the service is for). If and when that work is completed, digital humanitarians will be able to more easily create and share critical information about services available to people in need.
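
To make the payoff concrete, here is a small sketch of what an agreed taxonomy would enable: values in the #service+type and #service+eligibility columns could be validated automatically against the shared vocabulary. The vocabularies below are placeholders, not a proposal for what the taxonomy should contain.

```python
# Placeholder vocabularies; the real taxonomy for #service+type and
# #service+eligibility would be agreed by the HSDM community.
SERVICE_TYPES = {"food", "shelter", "medical", "legal", "wash"}
ELIGIBILITY = {"adults", "children", "families", "unaccompanied minors"}

def invalid_rows(rows):
    """Yield (row_number, problem) for values outside the agreed vocabularies."""
    for i, row in enumerate(rows, start=1):
        if row.get("#service+type") not in SERVICE_TYPES:
            yield i, f"unknown service type: {row.get('#service+type')!r}"
        if row.get("#service+eligibility") not in ELIGIBILITY:
            yield i, f"unknown eligibility: {row.get('#service+eligibility')!r}"

# Example: no problems are reported for a row that uses only agreed terms.
sample = [{"#service+type": "food", "#service+eligibility": "families"}]
assert list(invalid_rows(sample)) == []
```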

* Photo credits: John Englart (Takver)/Flickr CC-by-SA

Creating a Shared Data Model with a Spreadsheet

Over the last year, a number of clients have tasked me with bringing datasets from many different sources together. It seems many people and groups want to work more closely with their peers: not only to share and merge data resources, but also to arrive at a “shared data model” they can all use to manage data in compatible ways going forward.

Since spreadsheets are, by far, the most popular data collection and management tool, using spreadsheets for this type of work is a no-brainer.

After doing this task a few times, I’ve gotten confident enough to document my process for taking a bunch of different spreadsheet data models and turning them into a single shared one.

Here is the 10-step process (a scripted sketch of the comparison steps follows the list):

  1. Create a spreadsheet. The first column is for field labels. You can add additional columns for other information you’d like to analyze about each field, such as its data type, database name and/or reference taxonomies (e.g., an HXL tag).
  2. Place the names of the data models you’ve selected to analyze in the column headers to the right of the field labels.
  3. List all the fields of the longest data model on the left side of the sheet under the “Field Label” heading.
  4. Place an “x” in each cell of that first data model’s column to indicate it contains every field documented in the left-hand column.

    [Image: a sheet comparing three different data models with a set of field labels and a “taxonomy convention”.]
  5. Working left to right, place an “x” to indicate when a data model contains a given field label. If the data model has that field but uses a different label, place that label in the cell (4a). If it doesn’t have that field, leave the cell blank. Add any additional fields not in the first data model to the bottom of the Field Labels column (4b).

  6. Do the same thing for the next data models.
  7. Once you have all the data models documented in this way, then you can look and see what the most popular fields are by seeing which have the most “x”s. Drag those rows to the top, so the most popular fields are on the top, and the least popular fields are on the bottom. I like to color code them, so the most popular fields are one color (green), the moderately popular ones are another (yellow) and the least popular but still repeated fields are another (red).
  8. Once you have done all this, you should present it to your stakeholder community and ask them for feedback. Some good questions are: (a) If our data model were just the colored fields, would that be sufficient? Why or why not? What fields should we add or subtract? (b) Data model #1 uses label x for a field while data model #2 uses label y. What label should we use for this and why?

    [Image: a “template” sheet people can use to actually manage their data.]
  9. Once people start engaging with these questions, lay out the emerging data model in a new sheet, horizontally in the first row. Call this sheet a “draft template”. Bring the color coding with it to make it easier for people to recognize that the models are the same. As people give feedback, make the changes to the “template” sheet while leaving the “comparison” sheet as a reference. Encourage people to make their comments directly in the cell they’re referencing.
  10. Once all comments have been addressed and everyone is feeling good about the template sheet, announce that the sheet is the “official proposal” of a shared data model/standard. Give people a deadline to make their comments and requests for changes. If no comments/changes are requested – congratulations: you have created a shared data model! Good luck getting people to use it. 😉
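
For anyone who prefers to script steps 3 through 7 instead of doing them by hand, here is a rough sketch of the same comparison. The model names and fields are invented for illustration; in practice the value of the exercise comes from doing it in a shared spreadsheet where everyone can see and comment.

```python
# Sketch of steps 3-7: compare field labels across data models and rank them
# by how many models share them. Models and fields here are invented examples.
models = {
    "Model A": ["name", "address", "phone", "hours"],
    "Model B": ["name", "address", "email"],
    "Model C": ["name", "phone", "eligibility"],
}

# Union of all field labels, keeping first-seen order (step 3, plus step 5's additions).
field_labels = list(dict.fromkeys(f for fields in models.values() for f in fields))

# One row per field, one column per model, "x" where the model has the field (steps 4-6).
comparison = {
    label: {m: ("x" if label in fields else "") for m, fields in models.items()}
    for label in field_labels
}

# Step 7: sort so the most widely shared fields float to the top.
ranked = sorted(comparison, key=lambda label: -sum(v == "x" for v in comparison[label].values()))
for label in ranked:
    print(label, comparison[label])
```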

Do you find yourself creating shared data models? Do you have other processes for making them? Did you try out this process and have some feedback? Is this documentation clear? Tell me what you’re thinking in the comments below.

Preparing for the Worst, Hoping for the Best: Data Standards, Superstorm Sandy, and Our Resilient Future

Originally posted at OpenReferral

In the wake of Superstorm Sandy, many residents of New York City were left struggling.

[Photo: Occupy Sandy Relief Effort at St. Matthew St. Luke Episcopal]

Though a broad array of supportive services were available to survivors — from home rebuilding funds to mental health treatment — it’s often hard for people to know what’s available and how to access it. New York City lacks any kind of centralized system of information about non-profit health and human services. Given the centrality of non-profit organizations in disaster relief and recovery in the United States, this information scarcity means that for many NYC residents recovery from Sandy never quite happened.

As in any federally-declared emergency scenario, every officially-designated disaster case management program was mandated to use the same information system — the Coordinated Access Network (CAN.org) — to manage survivors’ access to benefits and other steps along the path to recovery. CAN has its own resource directory system, but it is proprietary and not available to the public; survivors often need to make a phone call to a case manager to get even the most basic information about the services. In conversations with those case managers who have had the privilege of being able to access this resource, we’ve heard that its interface is confusing and its data is often duplicated and outdated.

As a result, most disaster case management agencies ended up managing their own resource directories — in isolation from each other. Some organizations were able to cobble together relatively comprehensive service directories, but others had none and relied on individual case managers to solve the problem themselves. Now, just a bit over two years after the storm, the funding for these disaster case management programs is coming to a close — and so the local, personal knowledge about Sandy recovery services held by these social workers will disappear.

Yet the need remains great. Fewer than 3% of the houses that applied to be rebuilt after Sandy have been completed – and people involved know that this may be a decade-long process for thousands of New Yorkers. The organizations that will serve them will be local, under-funded or entirely unfunded, and organized through volunteer-based ‘long-term recovery organizations’.

Our organization, Sarapis, has been providing free/libre/open-source software solutions to grassroots groups and long term recovery coalitions since the storm first hit New York City in October 2012. Through our community technology initiative, NYC:Prepared, we’ve been helping community-based recovery groups make information about critical services accessible to the public. We’ve aggregated what may be the most comprehensive and searchable directory of services for Sandy victims in NYC on the web (a scary thought, considering our organization’s tiny budget).

[Image: NYC:Prepared's Post-Sandy Recovery Resource directory can be embedded within the websites operated by NYC's volunteer disaster recovery networks.]

The data in our directory comes from a hodgepodge of sources: nonprofit websites, PDF printouts, shared spreadsheets created by long term recovery group members, and .CSVs produced by individual case managers passionate about sharing resources. Initially, we used Google Spreadsheets and Fusion Tables to manage all of this.

With the introduction of the Human Services Data Specification (HSDS), through the Open Referral initiative, we’re now able to manage this information using a standardized, well documented format that others can also use and share. And that’s precisely what we try to encourage others to do.

Openly accessible, standardized human service directory data is critical for each of the phases of a disaster. For disaster preparedness, service information can help identify gaps in the allocation of resources that communities might need during a disaster. For disaster response, many different kinds of organizations and service providers need simultaneous access to the same information. For disaster recovery, survivors need an array of services to get back on their feet, and they should be able to find this information in a variety of ways.

With the Ohana API, we can glimpse a world in which all of the needs above can be met. So we’ve deployed a demonstration implementation of Ohana at http://services.nycprepared.org. In Ohana, we now have a lightweight admin interface for organizing our data and a front-end application to serve it to the public in a beautiful and mobile-friendly way. Since Ohana is an API, other developers can use it to make whatever interfaces they please.
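
For example, a developer could pull service data out of an Ohana deployment with a few lines of code. The sketch below assumes a JSON search endpoint with a keyword parameter; the exact base URL and parameter names depend on the deployment, so treat them as placeholders and check the instance’s API documentation.

```python
import requests

# Hedged sketch of querying an Ohana API deployment for services by keyword.
# The base URL and parameter names are placeholders, not documented guarantees.
BASE_URL = "http://services.nycprepared.org/api"  # hypothetical API root

def search_services(keyword, per_page=10):
    """Return matching service locations as parsed JSON (assumes a /search endpoint)."""
    resp = requests.get(
        f"{BASE_URL}/search",
        params={"keyword": keyword, "per_page": per_page},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example (hypothetical): print the name of each result for a "housing" search.
# for result in search_services("housing"):
#     print(result.get("name"))
```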

While we’re quite impressed with the Ohana product, its out-of-the-box web search interface won’t meet everyone’s needs. The system that we’d most like to use would be our open source disaster management software called Sahana. Sahana is the world’s leading open source resource management software, and we want to build a component — available to any community — that will enable it to consume, produce and deliver HSDS-compatible resource directory data.

By making it possible for any agency using Sahana-based systems to consume and publish resource directory data in the Open Referral format, we can shift the entire field of relief and recovery agencies towards more interoperable, sustainable, and reliable practices. Sahana specialists are ready to develop this open source, HSDS-compatible resource directory component — at an estimated cost of $5,000.  Please consider donating to our effort. And please reach out to Sarapis if you know of other communities and use cases in which this technology could enhance resilience in the face of crisis.