This article was originally published on Gotham Gazette on November 30, 2018
Smartphones are transforming transit in cities all over the world, and city governments are struggling to figure out how best to manage the change. Anyone looking to New York City’s recently enacted legislation affecting for-hire vehicle companies for a model will be disappointed: once again, the city’s political establishment decided to impose an outdated regulatory regime on innovative firms, making life harder for thousands of new taxi drivers while raising the price of rides for millions of New Yorkers and visitors to the city. The law, enacted this summer, caps the number of e-hail licenses in the city for a year and also enables the city to regulate the compensation structures offered to drivers.
Who benefits? Politicians argue that it’s existing drivers who received their taxi registration before the one-year moratorium on new licenses was implemented, but if you think they’re the primary beneficiaries, there’s a bridge in Brooklyn I’d like to sell you.
In reality, politicians got behind this legislation because they want to send a message to Silicon Valley, the startup community and their financiers: If you want access to the 8-plus million person New York City market, you’ll have to go through the local political class first, and that will cost you: in the form of taxes, campaign contributions, lobbyists, and more.
True to form, the left and right have staked out their usual positions on this issue. For the left, it’s all about protecting the wages and rights of the fewer than 10,000 existing drivers, even if that means higher costs for all New Yorkers and more obstacles for people who want to earn money by driving a car. For the right, it’s about protecting businesses and drivers from regulatory controls that will raise prices for consumers, even if that means facilitating the big-business takeover of an industry that has been a source of wealth for independent individuals and small businesses in New York City for a century.
As with many issues involving new technology, we need to look beyond the left-wing and right-wing ways of managing it, and instead look to the “open source way.”
What do we want? Safe, convenient rides, with low prices for riders, high income for drivers, positive impacts on traffic, and data protection for everyone involved.
The best way to achieve these ends isn’t complex licensure regimes, quotas on new taxis, or putting more surveillance technologies in our cars or on our streets. Instead, New York City should do for its local cab industry the same thing successful industries do for themselves: standardize how information is formatted and exchanged between systems. This makes it possible for information from one app, like Uber, to be read, understood and interacted with by another app, like Lyft or Google Maps.
Making ride-hailing data more standardized and interoperable will have a number of benefits.
First, it aggregates supply and demand, which increases competition in the taxi market leading to lower prices for riders and more business for drivers.
Second, it gives riders and drivers more options, allowing them to use an app with the mission of benefiting New Yorkers instead of benefiting investors in giant tech corporations.
Third, it mitigates a threat many people fear: that Uber, Lyft, and other venture-backed ride-sharing apps are subsidizing their own cab rides to undermine the legacy taxi industry, and then once the legacy industry is dead, they’ll jack up prices. That strategy won’t work if New York City is committed to maintaining a system of its own.
The idea of establishing a “ride sharing” (or “e-hail”) standard isn’t new. It has been discussed and proposed by a number of people in New York City’s tech community for years, including Ben Kallos, a tech-aware City Council member who proposed it in a 2014 bill, and by Chris Whong, now the lead developer of NYC Planning Labs, who proposed it in a 2013 blog post.
Critics of this approach have claimed that the city doesn’t have the capacity to develop its own e-hailing systems, but that simply isn’t true. Generic apps similar to Lyft and Uber exist in hundreds of markets around the world. Even local cab companies in New York City have developed their own apps.
Creating an e-hailing system for New York City would likely involve a three-step process: (a) develop a “ride sharing data standards” body that would bring riders, drivers, city agencies, and app developers together to create specifications for how all taxi-hailing information should be formatted and exchanged; (b) develop and operate a basic, open source e-hail smartphone application that would use these data standards to, like any one of the dozens of ride-hailing apps available around the world, allow New Yorkers to request rides and drivers to fulfill those requests; and (c) create a city-administered server that not only processes information from the current city taxi app but also allows other ride-sharing apps to exchange their information with the server.
This approach would give Uber, Lyft, and other popular apps a choice: they can plug in to the city’s e-hail exchange server and share their rider and driver information with other apps – or go it alone and face the consequences of having less access to rider and driver information than their competitors.
This approach leverages the city’s considerable influence to produce a number of benefits:
By following established best practices from government digital service organizations and open source communities, this system could be produced quickly and inexpensively. And by open-sourcing an app and inviting other cities to use and modify the New York City code, we could join a small but growing community of cities around the world developing and sharing open source software (such as Madrid’s Consul project) that enables them to provide government services faster, better, cheaper, and in a more ethical manner.
The original meaning of “regulation” wasn’t the levying of taxes and fees to penalize innovation; it was to “make regular” through the implementation of transparent business practices and the adoption of standard operating procedures. That is precisely what New York City should be doing, and it can do so by modeling best-practice behavior that challenges Silicon Valley (and its New York-based counterparts) to produce better products, for lower prices, in more responsible ways, with more respect for the rights of their users.
Any municipality can throw rocks at Silicon Valley by imposing taxes and creating obstacles to market entry, but few have the capacity and scale to challenge Silicon Valley by creating innovative products. New York City has that ability. Let’s use it.
Devin Balkind is a technologist and nonprofit executive who works on civic technology projects in New York City. On Twitter @DevinBalkind.
Photo: Ed Reed/Mayor’s Office
This piece was originally published on Gotham Gazette on October 3, 2017
Over the last few weeks, New Yorkers have watched with great anxiety as Texas, Florida and Puerto Rico, among many other places, were pummeled by massive hurricanes. Whenever we see storm destruction, memories of Sandy re-enter our consciousness, as does the question: Is New York City significantly better prepared for the next big one? My answer is “No.”
As a technology professional in disaster management, I’m constantly on the lookout for better ways to use software tools and information management practices to improve a city’s resilience. With new technologies coming out all the time, there are many pathways for improvement, and selecting the right place to focus preparedness efforts is never easy. In New York City’s case, however, it’s pretty simple: one of the most impactful things we could do, and certainly the lowest hanging fruit, is to build a canonical directory of all the health, human, and social services available in New York City so people know where to go to get the services they need before, during, and after a disaster.
The directory system I’m proposing is often called a “211 system.” In almost every major U.S. city and in over 90% of counties, if you call 2-1-1, you’re connected to a directory assistance representative who can refer you to the health and social services that meet your needs. If you call 2-1-1 in New York City, you’re connected to our 311 system, which is good at providing basic information about government services but isn’t able to refer you to the vast majority of nonprofit services available in the city.
211 systems are essential infrastructure for any coherent social safety net. Indeed, without them we don’t even know what the social safety net looks like! These systems enable people to find a huge array of help for a broad collection of things, including: housing, employment, food, children’s services, domestic violence counseling, and so much more.
Without a 211, social workers are left to solve this information problem on their own. Many create their own lists on paper and in Word documents that they share with each other. Some organizations maintain resource directories for certain kinds of people or neighborhoods. Well-funded institutions even pay for-profit companies to find this information and provide it to their clientele.
Our lack of a real 211 system is a hindrance to every nonprofit and government service provider, and an embarrassment to every politician who claims to care about New Yorkers in need. If they really cared, wouldn’t they make sure it was possible for every New Yorker to actually find the services they’re entitled to receive?
Prosperous and powerful New Yorkers tend to be unaware that the city lacks a 211 system because they rarely, if ever, use nonprofit social services. But when a disaster like Sandy happens, many people who never before needed access to nonprofit services suddenly do. Because of this dynamic, 211 systems serve extremely important functions during disaster recovery by providing a canonical source of information about services for survivors. They also tend to become the centers that convene and facilitate collaboration between government agencies, nonprofits, and community groups.
211 systems in New Jersey and Long Island played this role after Sandy, and by most accounts their recoveries went much smoother than New York City’s. In New York City, no local entity took responsibility for organizing all the nonprofit service information, which led to a massive coordination crisis. Things got so bad that some intrepid FEMA staff created a 211-style services directory themselves, even though it was so far outside their traditional responsibilities that they had to pretend that other organizations had created it out of fear of political backlash. To this day, no one in city government or the nonprofit establishment has taken responsibility for these coordination failures. Nor has any agency or organization taken responsibility for ensuring that it never happens again.
While incremental improvements in disaster management and recovery processes have certainly been adopted over the last five years, one of the most important Sandy lessons is that New York City desperately needs a fully-funded and well-functioning 211 system. Until we have one, New York City cannot claim to be following even the most basic best practices in disaster preparedness.
Devin Balkind is a candidate for New York City Public Advocate. He is also the President of the Sahana Software Foundation, a nonprofit organization that produces the world’s most popular open source software platform for disaster management. On Twitter @DevinBalkind.
Photo: After Sandy (Ed Reed/Mayor’s Office)
If you’re involved with the “cooperative community” on social media, you’ve probably heard a lot about platform cooperatives in recent years. The vision is simple: what if Uber or Airbnb were owned by their users, who could share decision-making responsibility and profits among themselves? Instead of being exploited by platforms, users could and should be running them. Just like cooperative supermarkets, these “platform co-ops” could market themselves as democratic alternatives to the venture-backed “Death Star” platforms coming out of Silicon Valley.
While I certainly agree we need to see new organizational forms take on the dominant venture-backed startup model, platform cooperatives have yet to prove that they’re up to the task. In fact, there are so few financially sustainable platform cooperatives in existence that, when Shareable magazine tried to list them in its article “11 Platform Cooperatives Creating a Real Sharing Economy,” it had to include businesses that don’t sell any products or services yet, businesses that aren’t cooperatives, and businesses that aren’t platforms. Some people complained about the exaggerated tone of the article in the comment section, so Shareable added a disclaimer at the bottom of the story.
The fact remains that, despite two years and two high-profile conferences in support of the concept, you can count the number of genuinely successful platform cooperatives on one hand. And it’s not like this is a radically new concept that people have to wrap their heads around. Cooperatives are a very popular and proven business structure.
Despite platform cooperativism’s modest gains, I do see the concept’s value. Its existence pressures successful online platforms to share some of their profits with their users, and invites entrepreneurs who want to create new platforms to try out a new organizational structure. I worry, however, that the cooperative community only has a limited amount of cognitive capacity it can use to process information technology innovation, and the fantasy of platform cooperativism is taking up space that could be better used by promoting and applying open source, open data, and peer-production principles to overcome some of the cooperative movement’s most pressing challenges. Instead of spilling lots of ink dreaming about how technology companies could be cooperatives, the “cooperative community” should be asking how cooperatives can benefit from technology development models that have a proven track record of success.
The two models I wish were being more widely discussed in the cooperative community are open source technology and open data practices.
Open source software and the peer-production process it has spawned have been wildly successful at challenging conventional software business models. In 2001, Steve Ballmer of Microsoft called Linux, the world’s most used open source software project, “a cancer.” A decade later, Microsoft was among the top five corporate contributors to Linux. Google’s core operating systems, ChromeOS and Android, both run on Linux, and so do emerging competitors, many out of Asia, that are leveraging Android’s open source core to compete directly with Google in the smartphone market. That is just one of a myriad of open source success stories that include WordPress, Firefox, Wikipedia, and so much more.
Corporations are adopting open source and other peer-production processes such as open data, open knowledge, and open hardware like wildfire, not because they want to share but because they want to make money. Meanwhile, cooperatives are expected to follow a set of principles, one of which is “cooperation among cooperatives,” and yet their adoption of open source and open data is minimal. Evidence that the cooperative community is not adopting open approaches or following principle six includes:
- Research reports from cooperative support organizations often carry restrictive copyrights instead of open, permissive Creative Commons licenses.
- Research data is locked away in PDFs instead of being made available in open data portals.
- Information about cooperative networks and membership organizations is often organized in proprietary data models instead of open ones, and not made openly available in bulk using open data formats.
- Cooperatives are often structured hierarchically like banks instead of horizontally like open source projects.
- There still isn’t a searchable online directory of cooperatives in the United States, much less an open data compliant one.
All of the above problems could be resolved if the cooperative movement followed best practices emerging from the unfashionable but very useful open source, open data, free culture and open access, and peer-to-peer movements. These practices have proven track records for enabling highly productive, widespread collaborations among many different types of stakeholder groups. One thing they very rarely do is organize themselves as cooperatives. Instead, open source projects tend to use for-profit, nonprofit and unincorporated entities.
We tend to view platform cooperativism as a vision that has yet to be realized, but it could just as easily be viewed as a potential future that never came. Cooperative organizational structures are not new. They have impacted a myriad of giant industries including food and agriculture, electricity and real estate. So why haven’t cooperatives been successful at software development? The answer to this question could be a key to moving platform cooperativism forward.
This was originally posted at Sarapis
Immediately after a disaster, information managers collect information about who is doing what, where, and turn it into “3W Reports.” While some groups have custom software for collecting this information, the most popular software tool for this work is the spreadsheet. Indeed, the spreadsheet is still the “lingua franca” of the humanitarian aid community, which is why UNOCHA’s Humanitarian Data Exchange project is designed to support people using this popular software tool.
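A 3W sheet is simple enough to sketch in a few rows. The example below uses real HXL hashtags (#org, #activity, #adm1) in a tag row beneath the human-readable headers; the organizations and places are made up for illustration.

```python
# A minimal "who, what, where" (3W) sheet, represented as rows.
# Row 2 carries HXL hashtags so machines can interpret the columns.
rows = [
    ["Who (organization)", "What (activity)", "Where (region)"],
    ["#org", "#activity", "#adm1"],
    ["Example Relief Org", "Water distribution", "Region A"],
    ["Another NGO", "Shelter repair", "Region B"],
]

for r in rows:
    print(", ".join(r))
```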
After those critical first few days, nonprofits and government agencies often transition their efforts from ad hoc emergency relief and begin to provide more consistent “services” to the affected population.
The challenge of organizing this type of “humanitarian/human services” information is a bit different from the challenges associated with disaster-related 3W reports, and similar to the work done by people who manage and maintain persistent nonprofit service directories. In the US, these providers are often called “211s” because dialing 2-1-1 in many communities connects you to a call center with access to a directory of local nonprofit service information.
During the ongoing migrant crisis facing Europe, a number of volunteer technical communities (VTCs) in the Digital Humanitarian Network engaged in the work of managing data about these humanitarian services. They quickly realized they needed to come up with a shared template for this information so they could more easily merge data with their peers, and also so that during the next disaster, they didn’t have to reinvent the wheel all over again.
Since spreadsheets are the most popular information management tool, the group decided to focus on creating a standard set of column headers for spreadsheets with the following criteria:
- Fewest fields possible
- HXL compliant (learn more about HXL)
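To make the criteria concrete, here is what an HXL-compliant sheet looks like: a human-readable header row with an HXL hashtag row directly beneath it. The hashtags (#org, #service, #loc, #contact) are real HXL tags, but the specific columns chosen here are illustrative, not the actual HSDM column set.

```python
import csv
import io

# Header row for humans, hashtag row for machines, then data rows.
headers = ["Organization", "Service name", "Address", "Phone"]
hxl_tags = ["#org +name", "#service +name", "#loc +name", "#contact +phone"]
row = ["Example Relief Org", "Hot meals", "123 Main St", "555-0100"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(headers)   # what a case manager reads
writer.writerow(hxl_tags)  # what an HXL-aware tool reads
writer.writerow(row)

print(buf.getvalue())
```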
To create this shared data model, we analyzed a number of existing service data models, including:
- Stand By Task Force’s services spreadsheet
- Advisor.UNHCR services directory
- Open Referral Human Service Data Standard (HSDS)
The first two data models came from the humanitarian sector and were relatively simple and easy to analyze. The third, Open Referral, comes from a US-based nonprofit service directory project that did not assume that spreadsheets would be an important medium for sharing and viewing data.
To effectively incorporate Open Referral into our analysis, we had to convert it into something that could be viewed in a single sheet of a spreadsheet (we call it “flat”). During the process we also made it compliant with the Humanitarian Exchange Language (HXL), which will enable Open Referral to collaborate more with the international humanitarian aid community on data standards work. Check out the Open Referral HSDS_flat sheet to see the work product.
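The flattening step can be illustrated with a toy example. HSDS links organizations, services, and locations as separate records, while a spreadsheet wants one row per service; the field names below are simplified stand-ins, not the actual HSDS schema.

```python
# A nested record in the spirit of HSDS: one organization offering
# multiple services at different locations. Field names are illustrative.
nested = {
    "organization": {"name": "Example Relief Org"},
    "services": [
        {"name": "Hot meals", "location": {"address": "123 Main St"}},
        {"name": "Legal aid", "location": {"address": "456 Court St"}},
    ],
}

def flatten(record):
    """Emit one flat row per service, repeating the organization fields."""
    org = record["organization"]["name"]
    for svc in record["services"]:
        yield {
            "org_name": org,
            "service_name": svc["name"],
            "address": svc["location"]["address"],
        }

rows = list(flatten(nested))
for row in rows:
    print(row)
```

The trade-off is duplication: the organization name repeats on every service row, which is exactly what makes the data viewable in a single sheet.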
We’re excited about the possibility that Open Referral will take this “flat” version under their wing and maintain it going forward.
Once we had a flat version of Open Referral, we could do some basic analysis of the three models to create a shared data model. You can learn about our process in our post “10 Steps to Create a Shared Data Model with Spreadsheets.”
The result of that work is what we’re calling the Humanitarian Service Data Model (HSDM). The following documents and resources will (hopefully) make it useful to you and your organization.
- HSDM Template – use this to collect data using the HSDM format
- HSDM Working Document – this shows you the work we did to arrive at the HSDM
- HSDM Index Document – this document has more information and additional links about HSDM.
- Humanitarian Data Standards Google Group – discuss and get updates on the HSDM and other data initiatives. Send feedback to this Google Group!
- Data Standards on ResilienceColab – news, directories and other information useful for human and humanitarian data standards initiatives.
We hope the HSDM will be used by the various stakeholders who were involved in the process of making it, as well as other groups that routinely manage this type of data, such as:
- member organizations of the Digital Humanitarian Network
- grassroots groups that come together to collate information after disasters
- big institutions like UNOCHA who maintain services datasets
- software developers who make apps to organize and display service information
I hope that the community that came together to create the HSDM will continue to work together to create a taxonomy for #service+type (what the service does) and #service+eligibility (who the service is for). If and when that work is completed, digital humanitarians will be able to more easily create and share critical information about services available to people in need.
* Photo credits: John Englart (Takver)/Flickr CC-by-SA
Over the last year, a number of clients have tasked me with bringing datasets from many different sources together. It seems many people and groups want to work more closely with their peers to not only share and merge data resources, but to also work with them to arrive at a “shared data model” that they can all use to manage data in compatible ways going forward.
Since spreadsheets are, by far, the most popular data collection and management tool, using spreadsheets for this type of work is a no-brainer.
After doing this task a few times, I’ve gotten confident enough to document my process for taking a bunch of different spreadsheet data models and turning them into a single shared one.
Here is the 10-step process:
- Create a spreadsheet. The first column is for field labels. You can add additional columns for other information you’d like to analyze about each field, such as its data type, database name, and/or reference taxonomies (e.g., HXL tag).
- Place the names of the data models you’ve selected to analyze in the column headers to the right of the field labels.
- List all the fields of the longest data model on the left side of the sheet under the “Field Label” heading.
- Place an “x” in the first data model’s column for each field it contains. Since you listed that model’s fields in the previous step, every row should get an “x”.
- Working left to right, repeat this for each remaining data model: place an “x” when the model has a field under the same label; if it has the field under a different label, put that label in the cell instead; and if it doesn’t have the field at all, leave the cell blank. Add any fields not already listed to the bottom of the Field Labels column.
- Once you have all the data models documented in this way, you can see which fields are most popular by counting which rows have the most “x”s. Drag those rows to the top, so the most popular fields are at the top and the least popular are at the bottom. I like to color code them, so the most popular fields are one color (green), the moderately popular ones are another (yellow), and the least popular but still repeated fields are a third (red).
- Once you have done all this, you should present it to your stakeholder community and ask them for feedback. Some good questions are: (a) If our data model were just the colored fields, would that be sufficient? Why or why not? What fields should we add or subtract? (b) Data model #1 uses label x for a field while data model #2 uses label y. What label should we use for this and why?
- Once people start engaging with these questions, lay out the emerging data model in a new sheet, horizontally in the first row. Call this sheet the “draft template”. Bring the color coding with it to make it easier for people to recognize that the models are the same. As people give feedback, make the changes to the “template” sheet while leaving the “comparison” sheet as a reference. Encourage people to make their comments directly in the cells they’re referencing.
- Once all comments have been addressed and everyone is feeling good about the template sheet, announce that it is the “official proposal” of a shared data model/standard. Give people a deadline to make their final comments and requests for changes. If no changes are requested – congratulations: you have created a shared data model! Good luck getting people to use it. 😉
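The field-comparison steps above, marking “x”s and ranking fields by popularity, can be sketched in a few lines of Python. The model names and field labels here are made up for illustration; each model is reduced to just its list of labels.

```python
from collections import Counter

# Each data model, reduced to its list of field labels (illustrative).
models = {
    "Model A": ["name", "description", "phone", "address", "hours"],
    "Model B": ["name", "phone", "address", "eligibility"],
    "Model C": ["name", "description", "address"],
}

# Count how many models contain each field label (the "x"s per row).
counts = Counter(label for fields in models.values() for label in fields)

# Sort so the most widely shared fields float to the top, mirroring
# the drag-to-top step in the spreadsheet.
ranked = sorted(counts, key=lambda field: -counts[field])

for field in ranked:
    print(f"{field}: appears in {counts[field]} of {len(models)} models")
```

In practice the spreadsheet version is better for group feedback, but a script like this is handy for a quick first pass over many models.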
Do you find yourself creating shared data models? Do you have other processes for making them? Did you try out this process and have some feedback? Is this documentation clear? Tell me what you’re thinking in the comments below.
Originally posted at OpenReferral
In the wake of Superstorm Sandy, many residents of New York City were left struggling.
Though a broad array of supportive services were available to survivors — from home rebuilding funds to mental health treatment — it’s often hard for people to know what’s available and how to access it. New York City lacks any kind of centralized system of information about non-profit health and human services. Given the centrality of non-profit organizations in disaster relief and recovery in the United States, this information scarcity means that for many NYC residents recovery from Sandy never quite happened.
As in any federally-declared emergency scenario, every officially-designated disaster case management program was mandated to use the same information system — the Coordinated Access Network (CAN.org) — to manage survivors’ access to benefits and other steps along the path to recovery. CAN has its own resource directory system, but it is proprietary and not available to the public; survivors often need to make a phone call to a case manager to get even the most basic information about services. In conversations with case managers who have been able to access this resource, we’ve heard that its interface is confusing and its data is often duplicated and outdated.
As a result, most disaster case management agencies ended up managing their own resource directories — in isolation from each other. Some organizations were able to cobble together relatively comprehensive service directories, but others had none and relied on individual case managers to solve the problem themselves. Now, just over two years after the storm, the funding for these disaster case management programs is coming to a close — and the local, personal knowledge about Sandy recovery services held by these social workers will disappear with it.
Yet the need remains great. Fewer than 3% of the houses that applied to be rebuilt after Sandy have been completed – and people involved know that this may be a decade-long process for thousands of New Yorkers. The organizations that will serve them will be local, under-funded or entirely unfunded, and organized through volunteer-based “long-term recovery organizations.”
Our organization, Sarapis, has been providing free/libre/open-source software solutions to grassroots groups and long term recovery coalitions since the storm first hit New York City in October 2012. Through our community technology initiative, NYC:Prepared, we’ve been helping community-based recovery groups make information about critical services accessible to the public. We’ve aggregated what may be the most comprehensive and searchable directory of services for Sandy victims in NYC on the web (a scary thought, considering our organization’s tiny budget).
The data in our directory comes from a hodgepodge of sources: nonprofit websites, PDF printouts, shared spreadsheets created by long-term recovery group members, and .CSVs produced by individual case managers passionate about sharing resources. Initially, we used Google Spreadsheets and Fusion Tables to manage all of this.
With the introduction of the Human Services Data Specification (HSDS), through the Open Referral initiative, we’re now able to manage this information using a standardized, well documented format that others can also use and share. And that’s precisely what we try to encourage others to do.
Openly accessible, standardized human service directory data is critical for each of the phases of a disaster. For disaster preparedness, service information can help identify gaps in the allocation of resources that communities might need during a disaster. For disaster response, many different kinds of organizations and service providers need simultaneous access to the same information. For disaster recovery, survivors need an array of services to get back on their feet, and they should be able to find this information in a variety of ways.
With the Ohana API, we can glimpse a world in which all of the needs above can be met. So we’ve deployed a demonstration implementation of Ohana at http://services.nycprepared.org. In Ohana, we now have a lightweight admin interface for organizing our data and a front-end application to serve it to the public in a beautiful and mobile friendly way. Since Ohana is an API, other developers can use it to make whatever interfaces they please.
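As a sketch of what that looks like for a developer, here is how a client might build a query against a deployed Ohana instance. The `/search` endpoint and `keyword`/`per_page` parameters follow our reading of the Ohana API documentation, but the host below is hypothetical and the snippet only constructs the request URL rather than calling a live server.

```python
from urllib.parse import urlencode

# Hypothetical API host for a demo Ohana deployment (not a real endpoint).
BASE = "http://api.example-ohana.org"

def search_url(keyword, per_page=10):
    """Build a URL to search service locations matching a keyword."""
    query = urlencode({"keyword": keyword, "per_page": per_page})
    return f"{BASE}/search?{query}"

# A client would GET this URL and receive JSON describing matching
# locations, which it can render however it pleases.
print(search_url("housing"))
```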
While we’re quite impressed with the Ohana product, its out-of-the-box web search interface won’t meet everyone’s needs. The system we’d most like to use is Sahana, our open source disaster management software. Sahana is the world’s leading open source resource management software for disasters, and we want to build a component — available to any community — that will enable it to consume, produce, and deliver HSDS-compatible resource directory data.
By making it possible for any agency using Sahana-based systems to consume and publish resource directory data in the Open Referral format, we can shift the entire field of relief and recovery agencies towards more interoperable, sustainable, and reliable practices. Sahana specialists are ready to develop this open source, HSDS-compatible resource directory component — at an estimated cost of $5,000. Please consider donating to our effort. And please reach out to Sarapis if you know of other communities and use cases in which this technology could enhance resilience in the face of crisis.