The definition of “municipalism” is still up for grabs. If you Google the word you’ll be given a snippet from Wikipedia about “libertarian municipalism”, a compelling but very specific utopian political philosophy of Murray Bookchin. Surely “municipalism” can and should mean something more.
Over the last fifty years, the percentage of people around the globe living in urban areas has increased from 30% to over 50%, but cities have not seen a corresponding increase in political power. Instead, nation-states and the transnational institutions that network them have become the centers of power. Many people predict this dynamic will change, and it already is. Efforts like UN Habitat III created space for cities to represent themselves at the UN for the first time in that organization’s history. The C40 Initiative has brought cities together to fight climate change by making significantly more aggressive emission reduction pledges than nation-states did at the Paris Summit. The Global Parliament of Mayors provides a venue for municipalities to share knowledge and make collective decisions. You can find more entities in our directory.
Over the last two thousand years, cities have frequently been more politically powerful than the nations and empires in which they’ve been located. Cities, municipalities and regional governments have performed many nation-state-like functions, such as building trade networks, engaging in foreign relations, waging war, completing massive public infrastructure projects and protecting their residents from state violence.
Municipalism should refer to the idea that cities and regions should have more autonomy from the nation-states in which they’re located, while also being active participants in a global network of peer municipalities that upholds human rights and humanitarian standards.
It should help mobilize residents to participate deeply in local problem solving and inspire municipal governments to share solutions with cities around the world.
Most of all, municipalism should provide a positive alternative to the failure of the nation-state and an affirmation that we can recenter political control at the local level while advancing human rights and humanitarian standards globally.
What does “municipalism” mean to you now? What do you think it should mean in the future? Let us know below.
One of the most powerful tools of a modern nation is its central bank’s ability to create money “out of thin air.” Nations can use this new money to purchase their own debt in the form of treasury bills, bonds and notes, allowing them to spend more than they earn in taxes and other income. If a nation prints too much money, however, it can create inflation, which reduces the value of its currency. In some instances, central banks can lose control of their currency’s inflation rate, destroying the value of the nation’s currency, collapsing its economy and leaving it at the mercy of predatory financial interests. Fear of inflation keeps nations from printing infinite amounts of money.
The US dollar is a bit different from other currencies because it isn’t simply the “reserve currency” of the United States; it also functions as the world’s reserve currency. Every nation in the world uses US dollars because the dollar is the easiest, and sometimes only, currency that can be used to purchase large quantities of commodities in international markets. The most important of these commodities is oil. Some commentators call this monetary arrangement the “petrodollar system” and view it as the successor to the Bretton Woods system, which still relied on nations to maintain gold reserves. The petrodollar system was established through a series of arrangements between the US and Saudi Arabia in the 1970s.
Since the 1970s, we’ve seen the development of other transnational monetary systems such as the Euro, as well as giant commercial “money center” banks, which have further consolidated the monopoly on monetary production in the hands of fewer and fewer institutions. If you had asked an economist a decade ago about the future of global monetary production, they’d have predicted more consolidation: the Euro in Europe would be complemented by the Amero in North America, and slowly but surely, the world would integrate into a single market with a single currency.
The financial collapse of 2008 helped undermine the vision of a global currency, but it was the invention of Bitcoin and the blockchain technology behind it that gave people a viable alternative to global monetary consolidation. A blockchain is a new type of database that is extremely good at producing “digital cash” and executing financial transactions. The core technology is open source, so there are few restrictions on who can use it or how. Blockchains are already making it possible for people to create secure digital money systems at extremely low cost. Big banks are using the technology to speed up international fund transfers of the kind handled by SWIFT, countries are using it to create new national digital currency systems, and entrepreneurs and online communities are using it to create their own currency systems outside the purview of the nation-state. It seems only a matter of time before sub-national governments and municipalities create their own currency systems and begin to challenge the nation-state’s monopoly on the production of money.
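To make the idea of a blockchain as a “new type of database” concrete, here is a minimal sketch using nothing beyond Python’s standard library. It shows only the hash-linked structure that makes a ledger tamper-evident; real systems like Bitcoin add consensus, digital signatures and peer-to-peer networking, and the transactions here are placeholders.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose identity depends on its contents and its predecessor."""
    block = {
        "timestamp": time.time(),
        "data": data,             # e.g., a batch of transactions (placeholder here)
        "prev_hash": prev_hash,   # links this block to the one before it
    }
    # Hash the block's contents; changing any field (or any earlier block)
    # changes this value, which is what makes the ledger tamper-evident.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# Build a tiny three-block chain.
chain = [make_block({"alice->bob": 10}, prev_hash="0" * 64)]
chain.append(make_block({"bob->carol": 4}, prev_hash=chain[-1]["hash"]))
chain.append(make_block({"carol->dan": 1}, prev_hash=chain[-1]["hash"]))

# Verifying the chain: each block must reference the hash of its predecessor.
for prev, curr in zip(chain, chain[1:]):
    assert curr["prev_hash"] == prev["hash"]
```

Because each block’s hash covers its predecessor’s hash, rewriting any historical entry invalidates every block after it, which is what lets mutually distrustful parties share one ledger.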
Under normal political conditions, the idea that cities and states would risk disrupting the current monetary order by creating their own currency systems would be outrageous. US city and state governments benefit greatly from the US government’s petrodollar system. Not only does the federal government give cities and states significant amounts of money in the form of grants, it also allows people to deduct income from municipal bonds from their federal taxes. This makes it possible for cities and states to access tremendous amounts of capital at rates much cheaper than corporations or individuals can. These municipal bonds fund everything from a local government’s general operations to specific infrastructure projects. But with the Trump administration and sub-national governments around the US on a collision course over immigration and other policies, it’s possible that the federal government will start trying to squeeze the finances of “sanctuary” cities and states. In fact, Trump has declared he’ll do precisely that by threatening to cut off federal funding to cities and states that don’t implement his widely unpopular immigration policies. Eliminating the federal tax deduction on municipal bond income would be an even more aggressive move he could use to coerce cities and states into following his policies.
In the past, the only institutions cities and states could look to for financial assistance were the federal government and large commercial banks. But that is changing. The blockchain makes it possible for sub-national governments to create their own financial systems and begin to insulate themselves from federal monetary policy and budgeting decisions. Cities and states could do many things with their own cryptocurrency networks. They could create cryptographically secured paper monies, credit and debit cards, and online transaction systems that let residents engage more easily in local commerce; build international remittance systems that allow residents to transmit money around the world; and create new types of financial contracts that aren’t mediated by commercial banks or federal entities. These monetary systems could be “backed” by valuable assets owned by cities and states, such as real estate, taxes and other revenue streams. The technology to implement these systems is new, but it’s developing rapidly: financial institutions invested nearly $2 billion in blockchain-based technologies in 2016, and commercial banks continue to invest billions of dollars a year to improve these alternative systems.
By developing autonomous, networked, blockchain-based financial systems for themselves, cities and states can create deep and direct financial ties with each other and challenge the US government’s monopoly on the production of money. This challenge, if delivered in a credible way, could threaten the US government’s capacity to pay its debts and seriously impact the federal government’s financial health.
I want to be clear: I’m not advocating for a financial war between US cities and states, and the federal government. Rather, I’m recognizing that blockchain-based technologies could enable sub-national governments to build a new type of power that they currently don’t have: the ability to compete with nation-state-based monetary systems. This threat could be an extremely powerful tool for cities and states when they negotiate with the Trump administration. If the federal government is going to threaten to undermine the financial health of cities and states, then cities and states should find ways to credibly threaten the federal government right back.
If you’d like to read more about how the blockchain technology fits into a broader history of DIY finance, check out my essay Finance without Force.
Within a few weeks of Trump’s victory, mayors of big “sanctuary cities” throughout America, including New York, Chicago and Los Angeles, declared that they wouldn’t collaborate with a Trump administration order to deport peaceful, law-abiding residents. Trump is now threatening to deny these cities federal funding unless they comply. The amount of money the Trump administration could withhold isn’t entirely clear, but Mother Jones estimates that Washington DC could potentially lose up to 25% of its budget, New York and San Francisco 10%, and Los Angeles 2%.
If cities want to have a leg to stand on during their negotiations with the Trump administration, they must prepare to operate without federal funding. If there is one message US cities need to convey to Trump, it’s that they can turn his belligerence into the political will they need to make municipal government more efficient, transparent and participatory than the federal government, and in the process restructure the relationship between municipalities and nations. Trump and his supporters must realize that the more pressure the federal government puts on cities, the more cities will unite, and the faster a post-nation-state paradigm will emerge. In short, if Trump doesn’t play his cards right, he could very well become the president who undermines the role of the nation-state in global affairs and kicks off a new version of the “devolution revolution”, this time based in cities and inspired by progressive values.
Municipal governments will not be able to fend off the federal government if their bureaucracies are inefficient and unpopular with the public. Most municipal bureaucracies were designed in an era of switchboards and memos and need a significant upgrade. Is there really any doubt that new systems designed around smartphones and open source software could outperform, by significant margins, the decades-old legacy systems most cities currently use? The factors limiting the upgrading of municipal bureaucracies are political, not technological. Changing how government works involves shifting the balance of power within agencies, departments and groups. These types of changes require tremendous amounts of buy-in from members of the bureaucracy and the public in general. This buy-in is hard to get, but with the nightmare of Trump using federal funds as leverage to coerce cities into adopting policies their residents abhor, it will become much easier to make the case that municipalities must engage in serious internal reform.
The choice for city residents should be clear: adopt 21st-century technologies and organizational forms, or submit to federal coercion. If current city leaders can’t or won’t execute the reforms needed to wean their cities off federal funds, then new leaders need to be brought in who will. Instead of just talking about it, let’s build it. For our cities. Now. As if the lives of our neighbors depend on it. Because they might.
Existing models show us how to systematically transform government agencies through the adoption of inexpensive open source tools and techniques. One group that performs this type of work is 18F, a unit within the federal government’s General Services Administration. 18F helps federal agencies figure out how to improve their operations using open source technology and iterative development processes. They’ve been extremely successful, to the point where government contractors lodged an official complaint that 18F was hurting their businesses by saving the federal government too much money. 18F is a small group in a massive federal government, so its impact is limited, but its model is spreading: the Pentagon’s Defense Digital Service and the White House’s US Digital Service both model themselves on 18F. City governments could and should create similar Digital Service Organizations (DSOs), both as a means of doing more with less and as a way of challenging the Trump administration’s competence.
One of the innovative features of DSOs is their commitment to clear documentation of business processes and use of open source software. This allows them to share the innovations they develop for one agency with other agencies within that government (and ideally with other governments around the world). It eliminates complex procurement processes, reduces costs and even creates an opportunity for highly skilled developers outside government to contribute to the effort. Since the solutions DSOs create are often open source, they can (and do) set up bounty systems that allow software developers to submit code that solves problems identified by the DSO. Allowing highly skilled urban residents to contribute code to projects that improve a city’s effectiveness is precisely the type of deep contribution city residents should be able to make to defend their cities from federal coercion.
There are existing “civic tech” volunteer groups in cities all around the country filled with people passionate about finding ways to help city governments run faster, better and cheaper. A great example is NYC’s BetaNYC. These groups are fantastic venues for sourcing and organizing volunteers who can amplify and support the work of DSOs and help make cities more resilient to federal coercion. But technology is just one area. Cities will need to build many more mechanisms that can convert their residents’ anger at federal policies into surges of local volunteerism that increase the capability of city governments and reduce their need for federal aid.
If cities can find more effective ways to mobilize their massive human resources, then the era of Trump will be a catalyst pushing cities to be more efficient, autonomous and globally networked than ever before. This might sound like overkill, or too much work, but we have to be prepared if we want to defend ourselves and our neighbors from destructive federal actions. And if it turns out we overreacted and mistakenly volunteered to improve our cities, so it goes.
The AIANY invited me to present my perspective on Occupy Sandy at their event “Stand Up! How to be Part of the Solution after a Disaster.” My presentation argues that Occupy Sandy, and the mutual aid work of its predecessor Occupy Wall Street, were physical-world manifestations of the “Open Aid” trend taking place in the disaster relief and humanitarian aid sectors.
The presentation begins by pointing to the fact that “faith in institutions” is at an unprecedented low in the USA at the same time as our economy is being transformed by widespread access to networked communication technologies. These technologies enable autonomously organized, local grassroots disaster response efforts to network with each other, creating a new type of entity the Department of Defense calls a “Grassroots Disaster Relief Network” (GDRN). In the virtual world, networked communication technologies are also allowing people with specialized technical skills to organize themselves into groups that provide information processing services through a wide variety of tools, including social media, GIS and collaborative documents. These groups are called Volunteer Technical Communities (VTCs).
I argue that GDRNs are a local/physical manifestation of the “Open Aid” concept, and VTCs are a global/digital one. Currently VTCs tend to serve formal response organizations such as UNOCHA, but in the not-too-distant future they’ll be able to collaborate directly with GDRNs, giving disaster survivors and their communities unprecedented access to information.
The presentation ends with some suggestions for how we can set up simple, open source systems to streamline information flows related to disasters.
I gave a very similar presentation to disaster response personnel at the Disaster Preparedness Exchange in Indianapolis a week later.
This conference was my first time engaging in person with the disaster relief community outside the United States, and I was extremely impressed. Unlike most conferences I attend, which have an abstract “theme” and random, broad sessions, this conference had a laser-like focus on a very specific data standard called the Common Alerting Protocol (CAP). The goal of this protocol is to facilitate the exchange of “all-hazard emergency alerts and public warnings over all kinds of networks.”
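To give a flavor of the level of detail these sessions worked at, here’s a simplified, illustrative sketch of a CAP-style alert assembled in Python. The elements shown follow the general shape of CAP 1.2, but this omits the required XML namespace and several mandatory elements; consult the OASIS spec before producing real alerts. The identifier and sender values are invented.

```python
import xml.etree.ElementTree as ET

# Simplified sketch of a CAP-style alert. Real CAP 1.2 messages live in
# the "urn:oasis:names:tc:emergency:cap:1.2" namespace and carry more
# required elements than shown here.
alert = ET.Element("alert")
ET.SubElement(alert, "identifier").text = "EXAMPLE-2015-001"  # hypothetical ID
ET.SubElement(alert, "sender").text = "alerts@example.org"    # hypothetical sender
ET.SubElement(alert, "sent").text = "2015-11-14T09:00:00-05:00"
ET.SubElement(alert, "status").text = "Actual"
ET.SubElement(alert, "msgType").text = "Alert"
ET.SubElement(alert, "scope").text = "Public"

info = ET.SubElement(alert, "info")
ET.SubElement(info, "category").text = "Met"   # meteorological hazard
ET.SubElement(info, "event").text = "Flash Flood Warning"
ET.SubElement(info, "urgency").text = "Immediate"
ET.SubElement(info, "severity").text = "Severe"
ET.SubElement(info, "certainty").text = "Observed"

print(ET.tostring(alert, encoding="unicode"))
```

The field-by-field recommendation sessions described below were essentially debates about elements like these: what values they may take, which are required, and how receivers should interpret them.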
Presentations and discussion were focused on the design and implementation of this specific data standard. There were also sessions organized in which stakeholders worked together to create field-by-field recommendations for how to improve future versions of CAP. The amount of information that was shared and the effective collaborations that took place were inspiring.
We need many, many more events focused exclusively on the design and implementation of data standards within the disaster relief and resilience community. If we can come together to create a shared language and set of data standards for our work, information sharing will become radically easier. Easier information sharing leads to better situational awareness, more efficient resource distribution and more positive outcomes.
I look forward to bringing some of the tools and techniques I learned at this event back with me to the USA.
Immediately after a disaster, information managers collect information about who is doing what, where, and turn it into “3W Reports.” While some groups have custom software for collecting this information, the most popular software tool for this work is the spreadsheet. Indeed, the spreadsheet is still the “lingua franca” of the humanitarian aid community, which is why UNOCHA’s Humanitarian Data Exchange project is designed to support people using this popular software tool.
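For readers who haven’t seen one, a toy 3W sheet can be as simple as three columns. The rows below are invented; the second header row shows the HXL hashtag convention that HDX supports for making spreadsheets machine-readable (the specific tags here are illustrative).

```python
import csv

# A toy 3W ("who, what, where") sheet. The organizations, activities and
# locations are invented; real 3W templates vary by response.
rows = [
    ["Organization", "Activity", "Location"],  # human-readable headers
    ["#org", "#activity", "#loc"],             # HXL hashtag row for machines
    ["Red Cross", "Shelter", "Rockaways"],
    ["Occupy Sandy", "Food distribution", "Red Hook"],
]

with open("3w_report.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The hashtag row is what lets tools merge sheets from different organizations even when the human-readable headers differ.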
After those critical first few days, nonprofits and government agencies often transition their efforts from ad hoc emergency relief and begin to provide more consistent “services” to the affected population.
The challenge of organizing this type of “humanitarian/human services” information is a bit different from the challenges associated with disaster-related 3W reports, and similar to the work done by people who manage and maintain persistent nonprofit services directories. In the US, these directories are often called “211” services because, in many communities, you can dial “211” to be connected to a call center with access to a directory of local nonprofit service information.
During the ongoing migrant crisis facing Europe, a number of volunteer technical communities (VTCs) in the Digital Humanitarian Network engaged in the work of managing data about these humanitarian services. They quickly realized they needed to come up with a shared template for this information so they could more easily merge data with their peers, and also so that during the next disaster, they didn’t have to reinvent the wheel all over again.
Since spreadsheets are the most popular information management tool, the group decided to focus on creating a standard set of column headers for spreadsheets.
To create this shared data model, we analyzed a number of existing service data models, including:
Stand By Task Force’s services spreadsheet
Advisor.UNHCR services directory
Open Referral Human Service Data Standard (HSDS)
The first two data models came from the humanitarian sector and were relatively simple and easy to analyze. The third, Open Referral, comes from a US-based nonprofit service directory project that did not assume that spreadsheets would be an important medium for sharing and viewing data.
To effectively incorporate Open Referral into our analysis, we had to convert it into something that could be viewed in a single sheet of a spreadsheet (we call it “flat”). During the process we also made it compliant with the Humanitarian Exchange Language (HXL), which will enable Open Referral to collaborate more with the international humanitarian aid community on data standards work. Check out the Open Referral HSDS_flat sheet to see the work product.
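For those curious what “flattening” means in practice, here’s a minimal Python sketch. The nested record below is invented for illustration and is not the official HSDS schema; the point is just that nested structures must become qualified column names before they fit on a single spreadsheet row.

```python
# A sketch of "flattening": collapsing a nested record (how an API might
# represent a service) into a single spreadsheet row. Field names here
# are illustrative, not the official HSDS schema.
def flatten(record, parent_key="", sep="."):
    """Recursively turn nested dicts into dotted column names."""
    flat = {}
    for key, value in record.items():
        column = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, column, sep))
        else:
            flat[column] = value
    return flat

service = {
    "name": "Legal Aid Clinic",
    "organization": {"name": "Example Org", "email": "info@example.org"},
    "location": {"address": "123 Main St", "city": "New York"},
}

print(flatten(service))
# {'name': 'Legal Aid Clinic', 'organization.name': 'Example Org', ...}
```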
We’re excited about the possibility that Open Referral will take this “flat” version under their wing and maintain it going forward.
We hope the HSDM (the shared data model described above) will be used by the various stakeholders who were involved in the process of making it, as well as by other groups that routinely manage this type of data, such as:
member organizations of the Digital Humanitarian Network
grassroots groups that come together to collate information after disasters
big institutions like UNOCHA that maintain services datasets
software developers who make apps to organize and display service information
I hope that the community that came together to create the HSDM will continue to work together to create a taxonomy for #service+type (what the service does) and #service+eligibility (who the service is for). If and when that work is completed, digital humanitarians will be able to more easily create and share critical information about services available to people in need.
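A starter version of that taxonomy work might look like the sketch below. The category lists are hypothetical placeholders rather than a proposal; only the #service+type and #service+eligibility tags come from the effort described above.

```python
# Hypothetical starter taxonomies for the two facets named above. Real
# taxonomies are far larger and curated by the community.
SERVICE_TYPES = ["food", "shelter", "legal", "medical", "cash assistance"]
ELIGIBILITY = ["children", "seniors", "refugees", "low-income households"]

# A row of service data keyed by HXL tags, validated against the taxonomies.
row = {"#service+type": "legal", "#service+eligibility": "refugees"}
assert row["#service+type"] in SERVICE_TYPES
assert row["#service+eligibility"] in ELIGIBILITY
```

Agreeing on controlled vocabularies like these is what allows datasets from different digital humanitarian groups to be merged and searched as one.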
* Photo credits: John Englart (Takver)/Flickr CC-by-SA
Over the last year, a number of clients have tasked me with bringing datasets from many different sources together. It seems many people and groups want to work more closely with their peers, not only to share and merge data resources but also to arrive at a “shared data model” they can all use to manage data in compatible ways going forward.
Since spreadsheets are, by far, the most popular data collection and management tool, using spreadsheets for this type of work is a no-brainer.
After doing this task a few times, I’ve gotten confident enough to document my process for taking a bunch of different spreadsheet data models and turning them into a single shared one.
Here is the 10-step process (a rough code sketch follows the list):
Create a spreadsheet. The first column is for field labels. You can add additional columns for other information you’d like to track about each field, such as its data type, database name and/or reference taxonomies (e.g., HXL tag).
Place the names of the data models you’ve selected to analyze in the column headers to the right of the field labels.
List all the fields of the longest data model on the left side of the sheet under the “Field Label” heading.
Place an “x” in every cell of that first data model’s column; by definition, it contains all the fields documented so far in the left-hand column.
Working left to right, move to the next data model and place an “x” wherever it has a field with the same label. If it has the field but under a different label, place that label in the cell (4a). If it doesn’t have the field, leave the cell blank. Add any additional fields not in the first data model to the bottom of the Field Labels column (4b).
Repeat for each remaining data model.
Once you have all the data models documented in this way, you can see which fields are most popular by counting which rows have the most “x”s. Drag those rows to the top, so the most popular fields are at the top and the least popular at the bottom. I like to color code them: the most popular fields one color (green), the moderately popular another (yellow), and the least popular but still repeated fields a third (red).
Once you have done all this, you should present it to your stakeholder community and ask them for feedback. Some good questions are: (a) If our data model were just the colored fields, would that be sufficient? Why or why not? What fields should we add or subtract? (b) Data model #1 uses label x for a field while data model #2 uses label y. What label should we use for this and why?
Once people start engaging with these questions, lay out the emerging data model in a new sheet, horizontally in the first row. Call this sheet the “draft template”. Bring the color coding with it to make it easier for people to recognize that the models are the same. As people give feedback, make changes to the “template” sheet while leaving the “comparison” sheet as a reference. Encourage people to make their comments directly in the cells they’re referencing.
Once all comments have been addressed and everyone is feeling good about the template sheet, announce that the sheet is the “official proposal” for a shared data model/standard. Give people a deadline to make comments and request changes. If no changes are requested – congratulations: you have created a shared data model! Good luck getting people to use it. 😉
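For the programmatically inclined, here’s a rough Python sketch of the comparison-matrix steps above, using invented model and field names. Note that recognizing “phone” and “telephone” as the same field under different labels (step 4a) is judgment work the code can’t do for you; it only builds the matrix and sorts rows by popularity.

```python
import csv

# Invented data models: each maps a model name to its field labels.
models = {
    "Model A": ["name", "phone", "address", "hours"],
    "Model B": ["name", "telephone", "address"],
    "Model C": ["name", "address", "eligibility"],
}

# Union of all field labels, keeping first-seen order (steps 3 and 4b).
fields = []
for model_fields in models.values():
    for field in model_fields:
        if field not in fields:
            fields.append(field)

# Mark an "x" wherever a model contains a field, then sort rows so the
# most widely shared fields float to the top.
matrix = [
    [field] + ["x" if field in fs else "" for fs in models.values()]
    for field in fields
]
matrix.sort(key=lambda row: row[1:].count("x"), reverse=True)

with open("comparison.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Field Label"] + list(models))
    writer.writerows(matrix)
```

From here you’d apply the color coding by hand and move on to the stakeholder questions in the steps above.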
Do you find yourself creating shared data models? Do you have other processes for making them? Did you try out this process and have some feedback? Is this documentation clear? Tell me what you’re thinking in the comments below.
“The software revolution has given people access to countless specialized apps, but there’s one fundamental tool that almost all apps use that still remains out of reach of most non-programmers — the database.” AirTable.com on CrunchBase
Database technology is boring but immensely important. If you have ever been working on a spreadsheet and wanted to be able to click on the contents of a cell to get to another table of data (maybe the cell has a person’s name and you want to be able to click it to see their phone #, photo, email, etc), then you’ve wished for a DIY database.
I’ve been waiting for this technology for many years and am happy to report that it’s nearly arrived. Two startups are taking on the DIY database challenge from different sides:
Awesome-Table is a quick and easy tool for creating visualizations of data inside Google Sheets. It offers a variety of searchable, sortable, filterable views, including tables, cards, maps and charts. The views are easy to embed, which makes them great for publishing directory data on websites. Here’s an awesome table visualization of worker coops in NYC.
AirTable is a quick and easy way to create tables that connect to and reference each other. This allows for multi-faceted systems you can travel through by clicking on entities. For example, you can define people in one table, organizations in another, and offices in a third, and then connect them all together so a user can browse a list of people, click on an individual’s organization, and then see all that organization’s information, including its many offices. Pretty useful!
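To make the linked-tables idea concrete, here’s a small sketch using SQLite from Python’s standard library; the tables and names are invented. Clicking through an AirTable-style interface is, under the hood, a walk along foreign keys like these.

```python
import sqlite3

# Three linked tables: people reference organizations, and organizations
# have many offices. All names below are invented for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE organizations (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE offices (id INTEGER PRIMARY KEY,
                          org_id INTEGER REFERENCES organizations(id),
                          city TEXT);
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT,
                         org_id INTEGER REFERENCES organizations(id));

    INSERT INTO organizations VALUES (1, 'Example Co-op');
    INSERT INTO offices VALUES (1, 1, 'Brooklyn'), (2, 1, 'Queens');
    INSERT INTO people VALUES (1, 'Ada', 1);
""")

# "Click" from a person to their organization's offices.
for (city,) in db.execute("""
    SELECT offices.city FROM people
    JOIN organizations ON people.org_id = organizations.id
    JOIN offices ON offices.org_id = organizations.id
    WHERE people.name = 'Ada'
"""):
    print(city)  # Brooklyn, Queens
```

The promise of DIY databases is that non-programmers get this relational structure without having to write SQL or code at all.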
The progress of these two startups leads me to believe we’re less than a year or two away from truly lightweight, easy to use, free of cost, DIY database building systems, and an open source one not too long after that.
The increasing accessibility of database technology has a lot of implications. The most obvious is that it will enable people to build their own information management systems for common use cases like contact directories, CRM systems and other applications that just can’t be built with existing spreadsheet technology. This will make a wide variety of solutions more accessible: whether you want to start or run a business, manage a common information resource, or just organize personal information better, you’ll enjoy DIY databases very much.
More interesting to me are the implications for people trying to reform and democratize institutions.
If you spend time with the types of information management systems used by institutions big and small (whether government agencies like the sanitation department or educational institutions like high schools and universities), you’ll quickly notice that many of their most useful and critical tools are nothing more than a set of data tables (directories) and visualizations of the data they contain (searchable/filterable tables, cards and maps).
These very rudimentary but widely used internal software systems not only define the information people within an institution can access and share, but also limit them to very specific workflows that are implicitly or explicitly defined in the software. Since workflows define the work people actually do, the people who control the workflows also control the workers.
If you want to change how an institution does things, you have to be able to change its information management systems. Since current database technology requires specialized software coding skills, changing these systems often turns into a bureaucratic nightmare filled with bottlenecks. First, a specific group of pre-approved people need to agree to design and fund a change, then another specific group of people need to program and implement the change, and yet another group is often tasked with training and supporting users who then have to use the updated system. That creates a lot of potential bottlenecks: executives who don’t know a change is needed or don’t care enough to fund the work; managers who don’t want to get innovated out of a job or don’t know how to design good software; technologists who don’t have the time to implement a change or don’t have the motivation to do the job right. With all those potential bottlenecks it’s easy to see why so many well funded institutions have such crappy software and archaic workflows.
When people try to improve institutions, they are often trying to improve workflows so more can get done with less time and resources. Unfortunately, the people who actually know what changes need to be made are rarely in a position to control the architecture of the databases they use to get things done.
With DIY databases, people within institutions can circumvent all these bottlenecks simply by making superior systems themselves. This can change a lot more than the type of information people have access to: it allows them to explore new ways of being productive. What they’ll inevitably discover, particularly if they’re in an institution that spends a lot of time managing information, is that they can do a better job managing information than many of their bosses.
DIY databases are enabling the type of horizontal, bottom-up innovation essential not just for better-functioning institutions, but also for more democratic ones. Databases are the “means of production” for many information workers. When workers can build and own those databases themselves, they’ll be able to achieve more ownership of their own work and take another big step towards being able to manage themselves.
Of course, as the technology improves and creating your own databases becomes easy, the hard part will become getting peers to use them. That’s a topic for another day.
The International Association of Emergency Managers (IAEM) Conference was described to me as the Oscars of the emergency management field. The event took place in the Paris Hotel in Las Vegas on November 14th, three days after the Paris attacks. Walking under the hotel’s faux Eiffel Tower and through its simulated Parisian streets was uncomfortable.
Nevertheless, the event was quite informative. Right before my presentation was a session about building your own emergency operations center (EOC) with inexpensive off-the-shelf tools, and another by the head of St. Louis’s Office of Emergency Management explaining how he managed the response to the killing of Michael Brown.
My presentation wasn’t as well attended as I had hoped. Maybe the title wasn’t compelling. But it went well. The audience was engaged and we had a good back and forth. Unfortunately, due to technical problems, my session wasn’t recorded like all the others. I would have really liked to see that video. Instead, at the request of the IAEM, I recorded my presentation via Hangout Live. You can see that video here.
This presentation is the most well rounded of them all. It gives a solid overview of the four facets of open aid:
Grassroots Disaster Relief Networks
Volunteer Technical Communities
At the end, it offers a diagram showing how we can build an integrated information management ecosystem that cycles information from local community groups through municipal, state and federal agencies and channels resources effectively.
In the wake of Superstorm Sandy, many residents of New York City were left struggling.
Though a broad array of supportive services were available to survivors — from home rebuilding funds to mental health treatment — it’s often hard for people to know what’s available and how to access it. New York City lacks any kind of centralized system of information about non-profit health and human services. Given the centrality of non-profit organizations in disaster relief and recovery in the United States, this information scarcity means that for many NYC residents recovery from Sandy never quite happened.
As in any federally-declared emergency scenario, every officially-designated disaster case management program was mandated to use the same information system — the Coordinated Access Network (CAN.org) — to manage survivors’ access to benefits and other steps along the path to recovery. CAN has its own resource directory system, but it is proprietary and not available to the public; survivors often need to make a phone call to a case manager to get even the most basic information about the services. In conversations with those case managers who have had the privilege of being able to access this resource, we’ve heard that its interface is confusing and its data is often duplicated and outdated.
As a result, most disaster case management agencies ended up managing their own resource directories, in isolation from each other. Some organizations were able to cobble together relatively comprehensive service directories, but others had none and relied on individual case managers to solve the problem themselves. Now, just over two years after the storm, the funding for these disaster case management programs is coming to a close, and the local, personal knowledge about Sandy recovery services held by these social workers will disappear with it.
The data in our directory comes from a hodgepodge of sources: nonprofit websites, PDF printouts, shared spreadsheets created by long-term recovery group members, and CSVs produced by individual case managers passionate about sharing resources. Initially, we used Google Spreadsheets and Fusion Tables to manage all of this.
With the introduction of the Human Services Data Specification (HSDS), through the Open Referral initiative, we’re now able to manage this information using a standardized, well documented format that others can also use and share. And that’s precisely what we try to encourage others to do.
Openly accessible, standardized human service directory data is critical for each of the phases of a disaster. For disaster preparedness, service information can help identify gaps in the allocation of resources that communities might need during a disaster. For disaster response, many different kinds of organizations and service providers need simultaneous access to the same information. For disaster recovery, survivors need an array of services to get back on their feet, and they should be able to find this information in a variety of ways.
With the Ohana API, we can glimpse a world in which all of the needs above can be met. So we’ve deployed a demonstration implementation of Ohana at http://services.nycprepared.org. In Ohana, we now have a lightweight admin interface for organizing our data and a front-end application to serve it to the public in a beautiful and mobile friendly way. Since Ohana is an API, other developers can use it to make whatever interfaces they please.
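As a sketch of what that might look like for a developer: the endpoint path, parameter names and response shape below are assumptions for illustration only, not Ohana’s documented API; check the project’s actual API documentation before building against it.

```python
import requests

# Hypothetical query against an Ohana-style deployment. The base path,
# "search" endpoint, "keyword" parameter and JSON response shape are all
# assumptions for the sake of illustration.
BASE_URL = "http://services.nycprepared.org/api"  # assumed base path

response = requests.get(f"{BASE_URL}/search", params={"keyword": "housing"})
response.raise_for_status()

for service in response.json():  # assumed: a JSON list of services
    print(service.get("name", "(unnamed)"))
```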
While we’re quite impressed with the Ohana product, its out-of-the-box web search interface won’t meet everyone’s needs. The system we’d most like to use is our open source disaster management software, Sahana. Sahana is the world’s leading open source resource management software, and we want to build a component, available to any community, that will enable it to consume, produce and deliver HSDS-compatible resource directory data.
By making it possible for any agency using Sahana-based systems to consume and publish resource directory data in the Open Referral format, we can shift the entire field of relief and recovery agencies towards more interoperable, sustainable, and reliable practices. Sahana specialists are ready to develop this open source, HSDS-compatible resource directory component — at an estimated cost of $5,000. Please consider donating to our effort. And please reach out to Sarapis if you know of other communities and use cases in which this technology could enhance resilience in the face of crisis.