Economy

Economic activities in Zürich

The Zürich Statistical Office collects data on the city and its residents. This data is published as Linked Data.

In this tutorial, we will show how to work with Linked Data, focusing on data about economic activities.
We will look into how to query, process, and visualize it.

SPARQL endpoint

Data on some economic activities is published as Linked Data and can be accessed with SPARQL queries sent over HTTP. The API endpoint is https://ld.stadt-zuerich.ch/query.
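
Before introducing a client library, it may help to see what such an HTTP request looks like at the lowest level. The sketch below uses the `requests` library (not used elsewhere in this tutorial) to send a trivial query following the SPARQL 1.1 Protocol; the query itself is just a placeholder:

```python
import requests

ENDPOINT = "https://ld.stadt-zuerich.ch/query"

# A minimal placeholder query: fetch any three triples.
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 3"

# Per the SPARQL 1.1 Protocol, the query can be POSTed as form data;
# the Accept header asks for JSON-formatted results.
response = requests.post(
    ENDPOINT,
    data={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
rows = response.json()["results"]["bindings"]
```

Each entry in `rows` is a dictionary mapping variable names (`s`, `p`, `o`) to their bound values. Doing this by hand for every query quickly becomes tedious, which is where graphly comes in.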

Let's use SparqlClient from graphly to communicate with the database. Graphly will allow us to:

  • send SPARQL queries
  • automatically add prefixes to all queries
  • format responses as pandas or geopandas dataframes
In [1]:
# Uncomment to install dependencies in Colab environment
#!pip install mapclassify
#!pip install git+https://github.com/zazuko/graphly.git
In [2]:
import mapclassify
import matplotlib
import matplotlib.cm

import pandas as pd
import plotly.express as px
import plotly.graph_objects as go

from graphly.api_client import SparqlClient
In [3]:
sparql = SparqlClient("https://ld.stadt-zuerich.ch/query")
wikisparql = SparqlClient("https://query.wikidata.org/sparql")

sparql.add_prefixes({
    "schema": "<http://schema.org/>",
    "cube": "<https://cube.link/>",
    "property": "<https://ld.stadt-zuerich.ch/statistics/property/>",
    "measure": "<https://ld.stadt-zuerich.ch/statistics/measure/>",
    "skos": "<http://www.w3.org/2004/02/skos/core#>",
    "ssz": "<https://ld.stadt-zuerich.ch/statistics/>"
})

SPARQL queries can become very long. To improve their readability, we will work with prefixes.

Using the add_prefixes method, we can define persistent prefixes. Every time you send a query, graphly will automatically add these prefixes for you.

Restaurants over time

Let's find the number of restaurants in Zurich over time. This information is available in the AST-BTA data cube. To give restaurant numbers a context, let's scale them by population size. The number of inhabitants over time can be found in the BEW data cube.

The query for number of inhabitants and restaurants over time is as follows:

In [4]:
query = """
SELECT *
FROM <https://lindas.admin.ch/stadtzuerich/stat>
WHERE {
    {
    SELECT ?time (SUM(?ast) AS ?restaurants)
    WHERE {
      ssz:AST-BTA a cube:Cube;
                    cube:observationSet/cube:observation ?obs_rest.   
      ?obs_rest property:TIME ?time ;     
           property:RAUM <https://ld.stadt-zuerich.ch/statistics/code/R30000> ;
           property:BTA <https://ld.stadt-zuerich.ch/statistics/code/BTA5000> ;
           measure:AST ?ast . 
    }
     GROUP BY ?time
  }
  {
    SELECT ?time ?pop
    WHERE {
      ssz:BEW a cube:Cube;
                    cube:observationSet/cube:observation ?obs_pop.   
      ?obs_pop property:TIME ?time ;     
           property:RAUM <https://ld.stadt-zuerich.ch/statistics/code/R30000>;
           measure:BEW ?pop
    }
  }  
}
ORDER BY ?time
"""

df = sparql.send_query(query)
df.head()
Out[4]:
time restaurants pop
0 1934-12-31 1328.0 315864.0
1 1935-12-31 1327.0 317157.0
2 1936-12-31 1321.0 317712.0
3 1937-12-31 1321.0 318926.0
4 1938-12-31 1334.0 326979.0

Let's calculate the number of restaurants per 10 000 inhabitants:

In [5]:
df = df.ffill()  # forward-fill missing values
df["Restaurants per 10 000 inhabitants"] = df["restaurants"]/df["pop"]*10000
In [6]:
fig = px.line(df, x="time", y = "Restaurants per 10 000 inhabitants", labels={"time": "Years"})
fig.update_layout(title_text='Restaurants in Zürich over time', title_x=0.5)

Restaurants in city quartiers

Let's find the number of restaurants in different parts of the city. The data on restaurants is available in the AST-BTA data cube. To show the quartiers on a map, we will need their geographic coordinates. This data is available in Wikidata. We will get the number of restaurants per district from our endpoint, and the quartier centroid from Wikidata.

Both pieces of information could even be obtained in a single SPARQL federated query; here we will send one query to each endpoint instead. The endpoint for Wikidata is <https://query.wikidata.org/sparql>.
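
For reference, a federated query uses the SERVICE keyword to delegate part of the pattern matching to a remote endpoint. The sketch below (untested against the endpoints) shows the shape of such a query: the outer pattern runs on the city endpoint, while the SERVICE block is evaluated on Wikidata:

```python
# Sketch of a federated query. The skos:/schema: prefixes are assumed to be
# added by graphly, as configured earlier; wdt: is declared explicitly.
federated_query = """
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?place ?geometry
WHERE {
  ?place_uri skos:inScheme <https://ld.stadt-zuerich.ch/statistics/scheme/Quartier> ;
             schema:name ?place ;
             schema:sameAs ?wikidata_id .
  BIND(IRI(?wikidata_id) AS ?wikidata_iri)

  # Evaluated remotely on the Wikidata endpoint
  SERVICE <https://query.wikidata.org/sparql> {
    ?wikidata_iri wdt:P625 ?geometry .   # P625 = coordinate location
  }
}
"""
```

Federated queries keep everything in one request, but they depend on the primary endpoint allowing SERVICE calls and can be slow for large intermediate results, so splitting the work into two queries, as done below, is often the more robust choice.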

The query for quartiers and their numbers of restaurants is:

In [7]:
query = """
PREFIX p: <http://www.wikidata.org/prop/>
PREFIX ps: <http://www.wikidata.org/prop/statement/>

SELECT ?place ?wikidata_iri (SUM(?ast) AS ?restaurants)
WHERE {
  
  ssz:AST-BTA a cube:Cube;
      cube:observationSet/cube:observation ?obs.   
      
  ?obs property:TIME ?time ;     
       property:RAUM ?place_uri ;
       property:BTA/schema:name ?bta ;
                   measure:AST ?ast .

  ?place_uri skos:inScheme <https://ld.stadt-zuerich.ch/statistics/scheme/Quartier> ;
             schema:name ?place ;
             schema:sameAs ?wikidata_id .
  
  FILTER (?time = "2017-12-31"^^xsd:date)
  
  BIND(IRI(?wikidata_id ) AS ?wikidata_iri ) .
  
  FILTER (?bta = "Verpflegungsbetriebe")
  
}
GROUP BY ?place ?wikidata_iri ?time
"""

restaurants = sparql.send_query(query)
restaurants.head()
Out[7]:
place wikidata_iri restaurants
0 Höngg http://www.wikidata.org/entity/Q455496 38.0
1 Mühlebach http://www.wikidata.org/entity/Q693397 40.0
2 Oberstrass http://www.wikidata.org/entity/Q693483 31.0
3 Unterstrass http://www.wikidata.org/entity/Q656446 75.0
4 Witikon http://www.wikidata.org/entity/Q392079 11.0

The query for quartiers' centroids is:

In [8]:
query = """
SELECT * WHERE {{
  ?wikidata_iri wdt:P31 wd:Q19644586;       # All objects being "statistical neighbourhoods of Zurich"
                p:P625/ps:P625 ?geometry.   # Their coordinates

  FILTER(?wikidata_iri IN({}))
}}
""".format("<" + ">,<".join(restaurants.wikidata_iri) + ">")

geometries = wikisparql.send_query(query)
geometries.head()
Out[8]:
wikidata_iri geometry
0 http://www.wikidata.org/entity/Q276792 POINT (8.54783 47.41940)
1 http://www.wikidata.org/entity/Q392079 POINT (8.58333 47.36667)
2 http://www.wikidata.org/entity/Q531899 POINT (8.52815 47.37120)
3 http://www.wikidata.org/entity/Q652455 POINT (8.56408 47.41019)
4 http://www.wikidata.org/entity/Q642353 POINT (8.53011 47.34394)

By joining restaurants and geometries, we get:

In [9]:
df = pd.merge(geometries, restaurants, how="inner", on="wikidata_iri")
df.head()
Out[9]:
wikidata_iri geometry place restaurants
0 http://www.wikidata.org/entity/Q276792 POINT (8.54783 47.41940) Seebach 84.0
1 http://www.wikidata.org/entity/Q392079 POINT (8.58333 47.36667) Witikon 11.0
2 http://www.wikidata.org/entity/Q531899 POINT (8.52815 47.37120) Werd 58.0
3 http://www.wikidata.org/entity/Q652455 POINT (8.56408 47.41019) Saatlen 10.0
4 http://www.wikidata.org/entity/Q642353 POINT (8.53011 47.34394) Wollishofen 43.0

Let's classify the number of restaurants into five buckets. We can use the mapclassify library to assign values in the restaurants column to one of five categories.

In [10]:
N_CATEGORIES = 5
df["text"] = df.place + "<br>Restaurants: " + df.restaurants.astype(int).astype(str)
classifier = mapclassify.NaturalBreaks(y=df["restaurants"], k=N_CATEGORIES)
df["rest_buckets"] = df[["restaurants"]].apply(classifier) 

Classified values can be easily visualized on a map.

In [11]:
norm = matplotlib.colors.Normalize(vmin=0, vmax=N_CATEGORIES)
colormap = matplotlib.cm.ScalarMappable(norm=norm, cmap=matplotlib.cm.viridis)
labels = mapclassify.classifiers._get_mpl_labels(classifier, fmt="{:.0f}")

fig = go.Figure()

for bucket in range(N_CATEGORIES):

    subset = df[df.rest_buckets == bucket]
    fig.add_trace(go.Scattermapbox(
        mode="markers",
        lat=subset.geometry.y,
        lon=subset.geometry.x,
        hovertext = subset.text,
        hoverinfo = "text",
        name=labels[bucket],
        marker={'size': ((subset.restaurants)**1.5)*0.6, "sizemode": "area", "sizemin": 4, "color": "rgba{}".format(colormap.to_rgba(bucket+1))}, 
    ))

fig.update_layout(
    margin={'l': 0, 't': 50, 'b': 0, 'r': 0},
    mapbox={
        'center': {"lat": 47.3815, "lon": 8.532},
        'style': "carto-darkmatter",
        'zoom': 11},
    showlegend=True,
    legend_title="Restaurants count",
    title_text='Restaurants in Zürich Quartiers', 
    title_x=0.5
)

fig.show("notebook")

After-school care: gender representation

Let's take a look at gender representation in the public sector. In the BES-BTA-SEX data cube we can find information on the number of employees in different organizations. The data is reported separately for each sex and establishment type. Let's find the number of male and female employees in after-school care (Hort).

The query for the number of female and male employees in after-school care over time looks as follows:

In [12]:
query = """
SELECT ?time ?employees ?sex
FROM <https://lindas.admin.ch/stadtzuerich/stat>
WHERE {
    ssz:BES-BTA-SEX a cube:Cube;
                cube:observationSet/cube:observation ?obs.   
    ?obs property:TIME ?time ;     
        property:RAUM/skos:inScheme <https://ld.stadt-zuerich.ch/statistics/scheme/Gemeinde> ;
        property:BTA/schema:name "Horte" ;
        property:SEX/schema:name ?sex ;
        measure:BES ?employees .
}
ORDER BY ?time
"""
df = sparql.send_query(query)
df.head()
Out[12]:
time employees sex
0 1966-06-30 1.0 männlich
1 1966-06-30 86.0 weiblich
2 1967-06-30 1.0 männlich
3 1967-06-30 86.0 weiblich
4 1968-06-30 87.0 weiblich

Let's rearrange and rename the columns:

In [13]:
df = pd.pivot_table(df, index="time", columns="sex", values="employees")
df = df.reset_index().rename_axis(None, axis=1)
df = df.rename(columns={"männlich": "male", "weiblich": "female"})
In [14]:
fig = px.histogram(df, x="time", y=["female", "male"], barnorm="percent")
fig.update_layout(
    title='After-school care: gender representation', 
    title_x=0.5,
    yaxis_title="Employees in %",
    xaxis_title="Years"
)
fig['layout']['yaxis']['range'] = [0,100]
fig.show("notebook")