Cities Can Prepare for Trump by Establishing Digital Service Organizations and Mobilizing Civic Tech Communities

Originally posted at municipalist.org

Within a few weeks of Trump’s victory, mayors of big “sanctuary cities” throughout America, including New York, Chicago and Los Angeles, declared that they wouldn’t collaborate with a Trump administration order to deport peaceful, law-abiding residents. Trump is now threatening to deny these cities federal funding unless they comply. The amount of money cities stand to lose isn’t entirely clear, but Mother Jones estimates that Washington DC could lose up to 25% of its budget, New York and San Francisco could lose 10%, and Los Angeles could lose 2%.

If cities want to have a leg to stand on during their negotiations with the Trump administration, they must prepare to operate without federal funding. If there is one message US cities need to convey to Trump, it’s that they can turn his belligerence into the political will they need to make municipal government more efficient, transparent and participatory than the Federal government, and in the process restructure the relationship between municipalities and nations. Trump and his supporters must realize that the more pressure the Federal government puts on cities, the more cities will unite, and the faster a post-nation-state paradigm will emerge. In short, if Trump doesn’t play his cards right, he could very well become the president who undermines the role of the nation-state in global affairs and kicks off a new version of the “devolution revolution”, but this time based in cities and inspired by progressive values.

Municipal governments will not be able to fend off the federal government if their bureaucracies are inefficient and unpopular with the public. Most municipal bureaucracies were designed in an era of switchboards and memos and need a significant upgrade. Is there really any doubt that new systems designed around smartphones and open source software could outperform, by significant margins, the many-decades-old legacy systems most cities currently use? The factors limiting the upgrading of municipal bureaucracies are political, not technological. Changing how government works involves shifting the balance of power within agencies, departments and groups. These types of changes require tremendous amounts of buy-in from members of the bureaucracy and the public in general. This buy-in is hard to get, but with the nightmare of Trump using federal funds as leverage to coerce cities into adopting policies their residents abhor, it will become much easier to make the case that municipalities must engage in serious internal reform.

The choice for city residents should be clear: adopt 21st century technologies and organizational forms, or submit to federal coercion. If current city leaders can’t or won’t execute the reforms needed to wean their cities off federal funds, then new leaders need to be brought in who will. Instead of talking about it — let’s build it. For our cities. And now. As if the lives of our neighbors depend on it. Because they might.

Existing models show us how to systematically transform government agencies through the adoption of inexpensive open source tools and techniques. One group that performs this type of work is 18F, a unit within the Federal Government’s General Services Administration. 18F helps federal agencies figure out how to improve their operations using open source technology and iterative development processes. They’ve been extremely successful, to the point where government contractors lodged an official complaint that 18F was hurting their businesses because it was saving the Federal government too much money. 18F is a small group in a massive federal government, so its impact is limited, but its model is spreading. The Pentagon’s Defense Digital Service and the White House’s US Digital Service both model themselves on 18F. City governments could and should create similar Digital Service Organizations (DSOs), both as a means of doing more with less and as a means of challenging the Trump administration’s competence.

One of the innovative features of DSOs is their commitment to clear documentation of business processes and use of open source software. This allows them to share the innovations they develop for one agency with other agencies within that government (and ideally with other governments around the world). It eliminates complex procurement processes, reduces costs and even creates an opportunity for highly skilled developers outside government to contribute to the effort. Since the solutions DSOs create are often open source, they can (and do) set up bounty systems that allow software developers to submit code that solves problems identified by the DSO. Allowing highly skilled urban residents to contribute code to a project that improves a city’s effectiveness is precisely the type of deep contribution city residents should be able to make to defend their cities from federal coercion.

There are existing “civic tech” volunteer groups in cities all around the country filled with people passionate about finding ways to help city governments run faster, better and cheaper. A great example is NYC’s BetaNYC group. These groups are fantastic venues for sourcing and organizing volunteers who can amplify and support the work of DSOs and help make cities more resilient to federal coercion. But technology is just one area. Cities will need to build many more mechanisms that can convert their residents’ anger at Federal policies into surges of local volunteerism that increase the capability of city governments and reduce their need for federal aid.

If cities can find more effective ways to mobilize their massive human resources, then the era of Trump will be a catalyst pushing cities to be more efficient, autonomous and globally networked than ever before. This might sound like overkill, or too much work, but we have to be prepared if we want to defend ourselves and our neighbors from destructive federal actions. And if it turns out we overreacted and mistakenly volunteered to improve our cities, so it goes.

Introducing Data Models for Human(itarian) Services

This was originally posted at Sarapis

Immediately after a disaster, information managers collect information about who is doing what, where, and turn it into “3W Reports.” While some groups have custom software for collecting this information, the most popular software tool for this work is the spreadsheet. Indeed, the spreadsheet is still the “lingua franca” of the humanitarian aid community, which is why UNOCHA’s Humanitarian Data Exchange project is designed to support people using this popular software tool.

After those critical first few days, nonprofits and government agencies often transition their efforts from ad hoc emergency relief and begin to provide more consistent “services” to the affected population.

The challenge of organizing this type of “humanitarian/human services” information is a bit different than the challenges associated with disaster-related 3W reports, and similar to the work being done by people who manage and maintain persistent nonprofit services directories. In the US, these types of providers are often called “211” because you can dial “211” in many communities in the US to be connected to a call center with access to a directory of local nonprofit service information.

During the ongoing migrant crisis facing Europe, a number of volunteer technical communities (VTCs)  in the Digital Humanitarian Network engaged in the work of managing data about these humanitarian services. They quickly realized they needed to come up with a shared template for this information so they could more easily merge data with their peers, and also so that during the next disaster, they didn’t have to reinvent the wheel all over again.

Since spreadsheets are the most popular information management tool, the group decided to focus on creating a standard set of column headers for spreadsheets.

To create this shared data model, we analyzed a number of existing service data models, including:

  • Stand By Task Force’s services spreadsheet
  • Advisor.UNHCR services directory
  • Open Referral Human Service Data Standard (HSDS)

The first two data models came from the humanitarian sector and were relatively simple and easy to analyze. The third, Open Referral, comes from a US-based nonprofit service directory project that did not assume that spreadsheets would be an important medium for sharing and viewing data.

To effectively incorporate Open Referral into our analysis, we had to convert it into something that could be viewed in a single sheet of a spreadsheet (we call it “flat”). During the process we also made it compliant with the Humanitarian Exchange Language (HXL), which will enable Open Referral to collaborate more with the international humanitarian aid community on data standards work. Check out the Open Referral HSDS_flat sheet to see the work product.
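
For readers who want to see this concretely, here’s a minimal sketch (in Python) of what “flattening” nested service data into a single HXL-tagged sheet can look like. The records, column headers and HXL hashtags below are illustrative; the actual HSDS_flat sheet defines its own columns and tags.

```python
import csv

# Illustrative nested records, loosely modeled on HSDS entities
# (an organization with several services). These are not real data.
records = [
    {
        "organization": "Example Relief Org",
        "services": [
            {"name": "Hot meals", "phone": "+30-555-0100", "city": "Lesvos"},
            {"name": "Legal aid", "phone": "+30-555-0101", "city": "Athens"},
        ],
    },
]

# Human-readable headers plus an HXL hashtag row beneath them, following
# the HXL convention of tagging columns. The specific tags and attributes
# here are examples, not the HSDS_flat definitions.
headers = ["Organization", "Service name", "Phone", "City"]
hxl_tags = ["#org", "#service+name", "#contact+phone", "#loc+city"]

with open("services_flat.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerow(hxl_tags)
    for org in records:
        for svc in org["services"]:
            # One row per service: the "flat" view of the nested model.
            writer.writerow([org["organization"], svc["name"], svc["phone"], svc["city"]])
```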

We’re excited about the possibility that Open Referral will take this “flat” version under their wing and maintain it going forward.

Once we had a flat version of Open Referral, we could do some basic analysis of the three models to create a shared data model. You can learn about our process in our post “10 Steps to Create a Shared Data Model with Spreadsheets.”

The result of that work is what we’re calling the Humanitarian Service Data Model (HSDM). The following documents and resources (hopefully) make it useful to you and your organizations.

We hope the HSDM will be used by the various stakeholders who were involved in the process of making it, as well as other groups that routinely manage this type of data, such as:

  • member organizations of the Digital Humanitarian Network
  • grassroots groups that come together to collate information after disasters
  • big institutions like UNOCHA who maintain services datasets
  • software developers who make apps to organize and display service information

I hope that the community that came together to create the HSDM will continue to work together to create a taxonomy for #service+type (what the service does) and #service+eligibility (who the service is for). If and when that work is completed, digital humanitarians will be able to more easily create and share critical information about services available to people in need.

* Photo credits: John Englart (Takver)/Flickr CC-by-SA

Creating a Shared Data Model with a Spreadsheet

Over the last year, a number of clients have tasked me with bringing datasets from many different sources together. It seems many people and groups want to work more closely with their peers, not only to share and merge data resources but also to arrive at a “shared data model” they can all use to manage data in compatible ways going forward.

Since spreadsheets are, by far, the most popular data collection and management tool, using spreadsheets for this type of work is a no-brainer.

After doing this task a few times, I’ve gotten confident enough to document my process for taking a bunch of different spreadsheet data models and turning them into a single shared one.

Here is the 10-step process (a short scripted sketch of steps 4–7 follows the list):

  1. Create a spreadsheet. First column is for field labels. You can add additional columns for other information you’d like to analyze about the field such as its data type, database name and/or reference taxonomies (i.e. HXL Tag).
  2. Place the names of the data models you’ve selected to analyze in the column headers to the right of the field labels.
  3. List all the fields of the longest data model on the left side of the sheet under the “Field Label” heading.
  4. Place an “x” in every cell under that first data model’s column; since its fields are the ones you just listed, it contains all the fields documented in the left-hand column so far.

    This is a sheet comparing three different data models with a set of field labels and a “taxonomy convention”.
  5. Working left to right, place an “x” to indicate when a data model has a field label contained therein. If the data model has that field but uses a different label, place that label in the cell (4a). If it doesn’t have that field, leave the cell blank. Add any additional fields not in the first data model to the bottom of the Field Labels column (4b).

  6. Do the same thing for the next data models.
  7. Once you have all the data models documented in this way, you can see which fields are most popular by counting which rows have the most “x”s. Drag those rows to the top, so the most popular fields are at the top and the least popular fields are at the bottom. I like to color code them, so the most popular fields are one color (green), the moderately popular ones are another (yellow) and the least popular but still repeated fields are another (red).
  8. Once you have done all this, you should present it to your stakeholder community and ask them for feedback. Some good questions are: (a) If our data model were just the colored fields, would that be sufficient? Why or why not? What fields should we add or subtract? (b) Data model #1 uses label x for a field while data model #2 uses label y. What label should we use for this and why?

    Give people a “template” they can use to actually manage their data.
  9. Once people start engaging with these questions, lay out the emerging data model in a new sheet, horizontally in the first row. Call this sheet the “draft template”. Bring the color coding with it to make it easier for people to recognize that the models are the same. As people give feedback, make the changes to the “template” sheet while leaving the “comparison” sheet as a reference. Encourage people to make their comments directly in the cells they’re referencing.
  10. Once all comments have been addressed and everyone is feeling good about the template sheet, announce that the sheet is the “official proposal” of a shared data model/standard. Give people a deadline to make their comments and requests for changes. If no comments/changes are requested – congratulations: you have created a shared data model! Good luck getting people to use it. 😉
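
If you’d rather script parts of this process than do them by hand, here’s a minimal sketch of steps 4–7 in Python with pandas. The field labels and model names are made up for illustration; in practice you would load the comparison sheet from a CSV export of your spreadsheet.

```python
import pandas as pd

# An illustrative comparison sheet: rows are candidate field labels,
# columns are the data models being compared. "x" means the model has
# the field under that exact label, any other non-blank value is the
# label that model uses instead, and blank means the field is absent.
comparison = pd.DataFrame(
    {
        "Field Label": ["Name", "Phone", "Address", "Eligibility"],
        "Model A": ["x", "x", "x", ""],
        "Model B": ["x", "Telephone", "", ""],
        "Model C": ["x", "x", "Location", "x"],
    }
)
model_cols = ["Model A", "Model B", "Model C"]

# Step 7: popularity = how many models contain the field under any label,
# then sort so the most widely shared fields rise to the top (these counts
# can also drive the green/yellow/red color coding).
comparison["popularity"] = (comparison[model_cols] != "").sum(axis=1)
comparison = comparison.sort_values("popularity", ascending=False)
print(comparison)
```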

Do you find yourself creating shared data models? Do you have other processes for making them? Did you try out this process and have some feedback? Is this documentation clear? Tell me what you’re thinking in the comments below.

DIY Databases are Coming

“The software revolution has given people access to countless specialized apps, but there’s one fundamental tool that almost all apps use that still remains out of reach of most non-programmers — the database.” AirTable.com on CrunchBase

Database technology is boring but immensely important. If you have ever been working on a spreadsheet and wanted to be able to click on the contents of a cell to get to another table of data (maybe the cell has a person’s name and you want to be able to click it to see their phone #, photo, email, etc), then you’ve wished for a DIY database.

I’ve been waiting for this technology for many years and am happy to report that it’s nearly arrived. Two startups are taking on the DIY database challenge from different sides:

If you can make a spreadsheet you can make a map with Awesome-Table.


Awesome-Table is a quick and easy tool for creating visualizations of data inside Google Sheets. It offers a variety of searchable, sortable, filterable views including tables, cards, maps and charts. The views are easy to embed, which makes them great for publishing directory data on websites. Here’s an awesome table visualization of worker coops in NYC.


AirTable is a quick and easy way to create tables that connect to and reference each other. This allows for multi-faceted systems you can travel through by clicking on entities. For example, you can define people in one table, organizations in another, and offices in a third, and then connect them all together so a user can browse a list of people, click on an individual’s organization, and then see all that organization’s information, including its many offices. Pretty useful!
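
To make the “linked tables” idea concrete, here’s a tiny sketch of that people/organizations/offices example as a relational database, written in Python with SQLite. It shows the kind of structure DIY database tools manage behind their interfaces; it isn’t AirTable-specific code, and the schema and data are made up.

```python
import sqlite3

# Three linked tables: the kind of structure a flat spreadsheet can't express.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE organizations (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE offices (id INTEGER PRIMARY KEY, org_id INTEGER REFERENCES organizations(id), city TEXT);
CREATE TABLE people (id INTEGER PRIMARY KEY, org_id INTEGER REFERENCES organizations(id), name TEXT, phone TEXT);

INSERT INTO organizations VALUES (1, 'Example Relief Org');
INSERT INTO offices VALUES (1, 1, 'Brooklyn'), (2, 1, 'Queens');
INSERT INTO people VALUES (1, 1, 'Ada Lovelace', '555-0100');
""")

# "Click through" from a person to their organization and all of its offices.
rows = con.execute("""
SELECT people.name, organizations.name, offices.city
FROM people
JOIN organizations ON organizations.id = people.org_id
JOIN offices ON offices.org_id = organizations.id
WHERE people.name = ?
""", ("Ada Lovelace",)).fetchall()

for person, org, city in rows:
    print(person, "->", org, "office in", city)
```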

The progress of these two startups leads me to believe we’re less than a year or two away from truly lightweight, easy to use, free of cost, DIY database building systems, and an open source one not too long after that.

The increasing accessibility of database technology has a lot of implications. The most obvious one is that it will enable people to build their own information management systems for common use cases like contact directories, CRM systems and other applications that just can’t be handled with existing spreadsheet technology. This will make a wide variety of solutions more accessible to people – so if you want to start or run a business, manage a common information resource, or just organize personal information better, you’ll enjoy DIY databases very much.

More interesting to me are the implications DIY databases have for people trying to reform and democratize institutions.

If you spend time in the type of information management systems used by institutions big and small – whether government agencies like the sanitation department or educational ones like high schools and universities – you’ll quickly notice that many of their most useful and critical tools are nothing more than a set of data tables (directories) and visualizations of the data contained therein (searchable/filterable tables, cards and maps of that data).

Turn spreadsheets into searchable/filterable directories with Awesome-Table.

These very rudimentary but widely used internal software systems not only define the information people within an institution can access and share, but also limit them to very specific workflows that are implicitly or explicitly defined in the software. Since workflows define the work people actually do, the people who control the workflow are also the people who control the workers.

If you want to change how an institution does things, you have to be able to change its information management systems. Since current database technology requires specialized software coding skills, changing these systems often turns into a bureaucratic nightmare filled with bottlenecks. First, a specific group of pre-approved people need to agree to design and fund a change, then another specific group of people need to program and implement the change, and yet another group is often tasked with training and supporting users who then have to use the updated system. That creates a lot of potential bottlenecks: executives who don’t know a change is needed or don’t care enough to fund the work; managers who don’t want to get innovated out of a job or don’t know how to design good software; technologists who don’t have the time to implement a change or don’t have the motivation to do the job right. With all those potential bottlenecks it’s easy to see why so many well funded institutions have such crappy software and archaic workflows.

When people try to improve institutions, they are often trying to improve workflows so more can get done with less time and resources. Unfortunately, the people who actually know what changes need to be made are rarely in a position to control the architecture of the databases they use to get things done.

With DIY databases, people within institutions can circumvent all these bottlenecks simply by making superior systems themselves. This can change a lot more than the type of information people have access to – it allows them to explore new ways of being productive. What they’ll inevitably discover, particularly if they’re in an institution that spends a lot of time managing information, is that they can do a better job managing information than many of their bosses.

DIY databases are enabling the type of horizontal and bottom-up innovation essential not just for better functioning institutions, but also for more democratic ones. Databases are the “means of production” for many information workers. When workers can build and own their own databases, they’ll be able to achieve more ownership of their work and take another big step towards being able to manage themselves.

Of course, as the technology improves and creating your own databases becomes easy, the hard part will certainly become getting peers to use them. That’s a topic for another day.

“Open Tech and Open Data: The Key to Whole Community Engagement” at IAEM 2015

The International Association of Emergency Management (IAEM) Conference was described to me as the Oscars of the Emergency Management field. The event took place at the Paris Hotel in Las Vegas on November 14th. It was three days after the Paris attacks. Walking under the hotel’s faux Eiffel Tower and through its simulated Parisian streets was uncomfortable.

Nevertheless, the event was quite informative. Right before my presentation was a session about building your own emergency operations center (EOC) with inexpensive off the shelf tools and another one by the head of St Louis’s Office of Emergency Management explaining how he managed the reaction to the killing of Michael Brown.

My presentation wasn’t as well attended as I had hoped. Maybe the title wasn’t compelling. But it went well. The audience was engaged and we had a good back and forth. Unfortunately, due to technical problems, my session wasn’t recorded like all the others. I would have really liked to have seen that video. Instead, at the request of the IAEM, I recorded my presentation via Hangout Live. You can see that video here.

This presentation is the most well rounded of them all. It gives a solid overview of the four facets of open aid:

  • Open Technologies
  • Open Data
  • Grassroots Disaster Relief Networks
  • Volunteer Technical Communities

At the end it offers a diagram for how we might build an integrated information management ecosystem that cycles information from local community groups through municipal, state and federal agencies and channels resources effectively.

Google Presentation

Video of Presentation

PDF Archive

Preparing for the Worst, Hoping for the Best: Data Standards, Superstorm Sandy, and Our Resilient Future

Originally posted at OpenReferral

In the wake of Superstorm Sandy, many residents of New York City were left struggling.

Occupy Sandy Relief Effort at St. Matthew St. Luke Episcopal

Though a broad array of supportive services were available to survivors — from home rebuilding funds to mental health treatment — it’s often hard for people to know what’s available and how to access it. New York City lacks any kind of centralized system of information about non-profit health and human services. Given the centrality of non-profit organizations in disaster relief and recovery in the United States, this information scarcity means that for many NYC residents recovery from Sandy never quite happened.

As in any federally-declared emergency scenario, every officially-designated disaster case management program was mandated to use the same information system — the Coordinated Access Network (CAN.org) — to manage survivors’ access to benefits and other steps along the path to recovery. CAN has its own resource directory system, but it is proprietary and not available to the public; survivors often need to make a phone call to a case manager to get even the most basic information about the services. In conversations with those case managers who have had the privilege of being able to access this resource, we’ve heard that its interface is confusing and its data is often duplicated and outdated.

As a result, most disaster case management agencies ended up managing their own resource directories — in isolation from each other. Some organizations were able to cobble together relatively comprehensive service directories, but others didn’t have any, relying on individual case managers to solve the problem themselves. Now, just a bit over two years after the storm, the funding for these disaster case management programs is coming to a close — and so the local, personal knowledge about Sandy recovery services held by these social workers will disappear.

Yet the need remains great. Less than 3% of the houses submitted for rebuilding after Sandy have been completed – and people involved know that this may be a decade-long process for thousands of New Yorkers. The organizations that will serve them will be local, under-funded or entirely unfunded, and organized through volunteer-based ‘long-term recovery organizations’.

Our organization, Sarapis, has been providing free/libre/open-source software solutions to grassroots groups and long term recovery coalitions since the storm first hit New York City in October 2012. Through our community technology initiative, NYC:Prepared, we’ve been helping community-based recovery groups make information about critical services accessible to the public. We’ve aggregated what may be the most comprehensive and searchable directory of services for Sandy victims in NYC on the web (a scary thought, considering our organization’s tiny budget).

NYC:Prepared's Post-Sandy Recovery Resource directory can be embedded within the websites operated by NYC's volunteer disaster recovery networks.

The data in our directory comes from a hodgepodge of sources: nonprofit websites, PDF printouts, shared spreadsheets created by long term recovery group members, and CSVs produced by individual case managers passionate about sharing resources. Initially, we used Google Spreadsheets and Fusion Tables to manage all of this.

With the introduction of the Human Services Data Specification (HSDS), through the Open Referral initiative, we’re now able to manage this information using a standardized, well documented format that others can also use and share. And that’s precisely what we try to encourage others to do.
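
As a rough illustration of why a standardized format helps, here’s a short Python sketch that joins two HSDS-style tables (one table per entity, linked by IDs) into a single directory view. The column names follow the general HSDS pattern but are simplified; check the HSDS documentation for the canonical file and field names.

```python
import pandas as pd

# In practice these tables would come from pd.read_csv() on the HSDS CSV
# files; the inline data below simply stands in for those files.
organizations = pd.DataFrame(
    {"id": ["org-1"], "name": ["Example Relief Org"]}
)
services = pd.DataFrame(
    {
        "id": ["svc-1", "svc-2"],
        "organization_id": ["org-1", "org-1"],
        "name": ["Hot meals", "Legal aid"],
    }
)

# Join the entity tables into a single human-readable directory view.
directory = services.merge(
    organizations,
    left_on="organization_id",
    right_on="id",
    suffixes=("_service", "_org"),
)
print(directory[["name_service", "name_org"]])
```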

Openly accessible, standardized human service directory data is critical for each of the phases of a disaster. For disaster preparedness, service information can help identify gaps in the allocation of resources that communities might need during a disaster. For disaster response, many different kinds of organizations and service providers need simultaneous access to the same information. For disaster recovery, survivors need an array of services to get back on their feet, and they should be able to find this information in a variety of ways.

With the Ohana API, we can glimpse a world in which all of the needs above can be met. So we’ve deployed a demonstration implementation of Ohana at http://services.nycprepared.org. In Ohana, we now have a lightweight admin interface for organizing our data and a front-end application to serve it to the public in a beautiful and mobile friendly way.  Since Ohana is an API, other developers can use it to make whatever interfaces they please.
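
For developers wondering what one of those other interfaces might look like, here’s a hypothetical sketch of querying an Ohana-style deployment over HTTP from Python. The base URL, endpoint path, parameters and response shape are all placeholders; consult the Ohana API documentation for the deployment you’re actually targeting.

```python
import requests

# Hypothetical Ohana-style search: the URL and parameter names below are
# placeholders, not a documented endpoint.
BASE_URL = "https://api.example.org"

resp = requests.get(f"{BASE_URL}/search", params={"keyword": "housing", "per_page": 5})
resp.raise_for_status()

# Assumes the response is a JSON list of location records with a nested
# organization, which is typical of service-directory APIs.
for location in resp.json():
    print(location.get("name"), "-", location.get("organization", {}).get("name"))
```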

While we’re quite impressed with the Ohana product, its out-of-the-box web search interface won’t meet everyone’s needs. The system we’d most like to use is the open source disaster management software Sahana. Sahana is the world’s leading open source resource management software, and we want to build a component — available to any community — that will enable it to consume, produce and deliver HSDS-compatible resource directory data.

By making it possible for any agency using Sahana-based systems to consume and publish resource directory data in the Open Referral format, we can shift the entire field of relief and recovery agencies towards more interoperable, sustainable, and reliable practices. Sahana specialists are ready to develop this open source, HSDS-compatible resource directory component — at an estimated cost of $5,000.  Please consider donating to our effort. And please reach out to Sarapis if you know of other communities and use cases in which this technology could enhance resilience in the face of crisis.

“Sharing Data to Improve How We Cooperate, Coordinate, Communicate & Collaborate” at NVOAD 5/14/15

This presentation was delivered at the National Voluntary Organizations Active in Disaster Conference 2015 in New Orleans.

I’ve been an active (and actively marginalized) participant in my local NYCVOAD community, so it was nice to feel accepted by the broader VOAD community.

Of all the presentations I’ve given, this one felt the best. The audience was very engaged and we had a robust back and forth, with outbursts coming from the audience. It felt electric, like a unique space. The feedback was fantastic. Much thanks goes to Marie Irvine, who helped put the presentation together and co-presented with me.

This presentation is based around the concept that “Open Networks that efficiently provide relief after a disaster are built on Open Technology and Open Data.” It explains NYC:Prepared’s toolset and includes extensive training materials about open data within the context of disaster.

Google Presentation

PDF Archive

Scattered Showers of Interest

The establishment, at least a very small subset of it, discovered my work the second week of October.  It wasn’t a thunderstorm of interest — more like scattered showers — but when you’ve been in the desert for a while, a little rain can go a long way.

First stop on my tour was Washington DC, where I spoke on a panel organized by STAR-TIDES at National Defense University about Occupy Sandy, along with Shlomo Roth and Isadora Blachman-Biatch, who was one of the authors of the fantastic Occupy Sandy report funded by DHS.  I think there is video somewhere…

Then I was flown to San Jose by the IEEE for their Global Humanitarian Technology Conference to give a talk titled “How Humanitarian Organizations and Grassroots Networks Can Collaborate on Disaster Response and Recovery”.

The last day of the conference I got on a red eye flight back to New York City so I could give “Grassroots Disaster Relief Network Response to Superstorm Sandy: Successes and Opportunities” at The Christian Regenhard Center for Emergency Response Studies (RaCERS) of John Jay College.

You can find more public presentations about NYC:Prepared here.

 

 Rain designed by SuperAtic LABS from the Noun Project

NYC:Prepared Presentation at RaCERS John Jay College 10/14/14

“The Christian Regenhard Center for Emergency Response Studies (RaCERS) is a unique applied research center focused on documentation of lessons learned and planning for future large-scale incidents.”

I had the honor of presenting to one of their classes of students pursuing master’s degrees in Emergency Management, as well as to a number of professors in the school.

This presentation was very similar to the one at the IEEE HTC Conference a few days earlier, but since it was to a New York focused audience, I explored the connection between Occupy Wall Street and Occupy Sandy a bit more extensively.

The audience reaction was extremely positive. The professors and students asked a ton of questions, and everyone expressed frustration with the state of information sharing in the Emergency Management sector. There was one older man who mean-mugged me the entire presentation, had no questions and didn’t say a word. I couldn’t tell if he was upset with me for arriving late (sorry!) or because he really didn’t like the way I presented Occupy Wall Street as an important element in the resilience of New York City.

Google Presentation

PDF Archive

NYC:Prepared Presentation at IEEE HTC 2014

I had the honor of presenting NYC:Prepared at the Institute of Electrical and Electronics Engineers Humanitarian Technology Conference 2014 held in San Jose, California.

The presentation situates Occupy Sandy within the context of Occupy Wall Street and explains how social movements prepare participants to respond during disaster.

It goes on to outline four phases of Occupy Sandy activity:

  • Scouting
  • Networking
  • Relationship Building
  • Autonomy Projects

NYC:Prepared is one of the autonomous projects that emerged from Occupy Sandy. The presentation continues with a vision of how grassroots communities and institutional relief providers can use free and open technology to collaborate more effectively.

I review the software and data needs of various stakeholders and propose a set of free and open solutions. Then I present the various tools and templates we’ve made available in New York City and beyond.

The presentation is long and pretty comprehensive – too much so for the audience. They appreciated my style and enthusiasm, but in the future I’ll certainly try to reduce the comprehensive nature of the presentation and focus more on precisely what I want to deliver to the specific audience.

Presentation on Google

PDF Archive