Introduction: Awash River Wflow Tutorial

Wflow is an open source, fully distributed hydrological modelling platform that performs simulations on raster (gridded) data and makes maximum use of satellite data. It accounts for precipitation, interception, snow accumulation and melt, evapotranspiration, soil water, surface water and groundwater recharge, and calculates all hydrological fluxes in every grid cell at every time step, based on physical parameters and meteorological input data. Wflow is applied worldwide to analyse flood hazards, droughts, climate change impacts and land use changes.

Step 1: Getting Started

The first step, to make the process as easy as possible, is downloading the right programs. To use Wflow effectively, the recommended programs are:

* Notepad++

Notepad++ is used to edit the .ini files needed to run Wflow. Here you can make changes to the in- and output of the model.

* QGIS

QGIS is used to create the raster data that serves as input for Wflow and to visualise the output data.

* Panoply

Panoply is a program with a similar function to QGIS for reading the output data. However, Panoply is easier to use and presents the output data more clearly.

* Excel

Excel is used for reading output data and processing this data into graphs.

Step 2: Making Use of the Supplied Data

Deltares or another institute may have supplied data that can be used for running Wflow. Otherwise, the data is freely available on the internet (open source).

The first batch of data to be used is the measured data from the GRDC stations. This data is supplied as .cmd files, which need to be converted to Excel data before they can be used. This can be done by following the steps shown below (a scripted alternative is sketched after the list):

* Open Excel

* Open the folder in which you saved the data and select 'show all files' (See figure 1)

* Open the .cmd files

* Follow the steps in the PDF below.
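
If you prefer a scripted alternative to the Excel import, the sketch below reads a GRDC-style daily discharge export with pandas. The file name, the '#' comment marker for the metadata header and the semicolon delimiter are assumptions about the export format, not something prescribed by this tutorial; adjust them to what the supplied files actually look like.

```python
# Sketch: reading a GRDC-style daily discharge export with pandas instead of Excel.
# The file name, comment marker and delimiter below are assumptions about the format.
import pandas as pd

df = pd.read_csv(
    "grdc_station.cmd",   # hypothetical file name
    comment="#",          # skip metadata lines at the top of the file
    sep=";",
    parse_dates=[0],      # first column is assumed to hold the measurement date
)
df.columns = [c.strip() for c in df.columns]
print(df.head())
```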

Step 3: Analysing Excel Data

Now that you have successfully converted the text files to an Excel document, you can start analysing the data. To do this, the data must be generalised and errors need to be removed. But first, all the data should be added to a single Excel sheet. You can arrange it the way you want, but the first column should consist of all the dates between which the measurements were performed. The data from the meteo stations must be copied and pasted into a column next to their respective dates. An example of the end result can be seen in figure 2.

Secondly, the data needs to be generalised and errors need to be removed. Examine the data closely and filter out any inconsistencies. After this step has been carried out, you can make graphs of the data. An example of such a graph can be seen in figure 3.
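
A minimal pandas sketch of this merge-and-screen step is given below. The station names, file names and the 'date'/'discharge' column names are illustrative, and the negative-value filter is only an example of the kind of inconsistency to look for.

```python
# Sketch: merging several station series into one date-indexed table and screening
# obvious errors. Names are illustrative; adapt them to your own files.
import pandas as pd
import matplotlib.pyplot as plt

files = {"MelkaKuntire": "melka_kuntire.csv", "Hombole": "hombole.csv"}

series = {}
for name, path in files.items():
    station = pd.read_csv(path, parse_dates=["date"])
    series[name] = station.set_index("date")["discharge"]

combined = pd.DataFrame(series).sort_index()

# Drop physically impossible values (e.g. negative discharge or -999 sentinels).
combined = combined.mask(combined < 0)

ax = combined.plot()
ax.set_ylabel("discharge [m3/s]")
plt.show()
```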

The next step is to use the data. Using the following equation, you can calculate the amount of storage in the river basin:
Q = D * S
Q = River discharge [m³/s]
D = Reservoir coefficient [s^-1]
S = Storage [m³]


The D factor is a coefficient that represents the residence time of water in the river system (1/D is the average residence time). This factor needs to be calculated separately, which can be done using the following equation:
Q = Q0 * exp(-D * t)
Q = River discharge [m³/s]
Q0 = River discharge at t=0 [m³/s]
D = Reservoir coefficient [s^-1]
t = Time [s]

In order to use this equation you first need to choose a single peak in one of the graphs made in figure 3. After isolating it, calculate the value of the D factor at every time step (one day) until the graph stops descending. This can be done in Excel using the following equation:
D = -ln(Q/Q0) / t

The next (and last) step is to calculate the average D factor so you can use it in the first equation to calculate the storage. Repeat all of these steps for multiple peaks. After several D factors have been calculated, another average can be taken, which can be used to calculate the storage across the years for a specific catchment.
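
The sketch below works through this recession analysis for one peak, using made-up daily discharges for the recession limb. It applies D = -ln(Q/Q0) / t to every daily value after the peak, averages the results, and then uses Q = D * S to estimate the storage.

```python
# Sketch of the recession analysis above, with made-up daily discharges for the
# recession limb of one peak (Q0 is the value at the peak, t = 0).
import numpy as np

q = np.array([120.0, 95.0, 78.0, 66.0, 58.0, 53.0])  # discharge [m3/s], one value per day
t = np.arange(len(q)) * 86400.0                      # time since the peak [s]

# D = -ln(Q/Q0) / t for every time step after the peak (the peak itself, t = 0, is skipped).
d_per_step = -np.log(q[1:] / q[0]) / t[1:]
d_mean = d_per_step.mean()                           # average reservoir coefficient [1/s]

# With Q = D * S the storage follows as S = Q / D, here evaluated at the peak discharge.
storage = q[0] / d_mean
print(f"average D = {d_mean:.2e} 1/s, storage at the peak = {storage:.2e} m3")
```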

Step 4: Exploring the Wflow Map Structure

Download the Wflow zip by following this link. After unzipping the file, go to the folder 'MelkaKuntire'. In this folder you can find the following subfolders and files:

* inmaps - This folder contains the forcing data: temperature, precipitation and evaporation. This is the basis of the model.

* data - This folder contains the catchment area, which defines the area that wflow models.

* instate - You have to create this folder yourself in the "MelkaKuntire" folder. It is used to run a "warm run": a run that starts from the states of a run you have performed before. The state data from the first run should be copied from the "outstate" folder to the newly made "instate" folder (a minimal copy sketch follows after this list). Don't forget to edit the sbm.ini file to "reinit = 0" (See figure 4).

* intbl - This folder contains different tables with data that wflow uses in the model.

* setup - This folder is supplied with the case; you do not need to change anything here. The setup folder is used to create new cases.

* staticmaps - This folder contains multiple .map files, which are used by wflow in the model. You can open these files in QGIS to see what each map looks like.

* wflow_sbm.ini - This file is used to specify the in- and output of the model. An example of an input you can change is the run time (See figure 4)
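
The copy sketch referred to in the "instate" item is shown below. It assumes the previous run wrote its state maps to an "outstate" folder inside the SBM run folder described in step 5; check where your run actually placed its states before copying.

```python
# Sketch: preparing a warm run by copying the state maps of a previous run into a
# newly made 'instate' folder. The 'SBM/outstate' location is an assumption.
import shutil
from pathlib import Path

case = Path("MelkaKuntire")
outstate = case / "SBM" / "outstate"   # states written at the end of the previous run
instate = case / "instate"             # states read by wflow when reinit = 0

instate.mkdir(exist_ok=True)
for state_map in outstate.glob("*.map"):
    shutil.copy(state_map, instate / state_map.name)
    print("copied", state_map.name)
```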

Step 5: Running the Model

* The above video is a small tutorial for the following steps.

* The first step in running the model is to open the wflow_sbm.ini file with Notepad++. Here you can edit the start and end dates. In this example (see figure 5) we will run 6 years (1990 - 1996). Save the .ini file. The data is available for 18 years, so it is possible to make a longer run.

* Go to the wflow folder (figure 6). There is a .bat file: "run-hydrology-MelkaKuntire.bat". This .bat file is used to run the hydrology model; run the model by double-clicking "run-hydrology-MelkaKuntire.bat" (See figure 10)

* At the end of the run, go to the MelkaKuntire folder (figure 7). A new folder has appeared: SBM. This folder contains the outputs of the run. Open this new SBM folder. The dynamic model outputs have been saved in the wflow_outputs.nc file. The other outputs are extra static maps or statistics saved by the model.
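
If you want a quick look inside wflow_outputs.nc without Panoply, the short sketch below opens it with xarray (requires the xarray and netCDF4 packages). Which variables the file contains depends on the output settings in wflow_sbm.ini.

```python
# Sketch: opening the dynamic NetCDF output with xarray as an alternative to Panoply.
import xarray as xr

ds = xr.open_dataset("MelkaKuntire/SBM/wflow_outputs.nc")
print(ds)                   # dimensions, coordinates and data variables
print(list(ds.data_vars))   # names of the saved output variables
```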

Step 6: Exploring the Model Outputs

* To use the data that wflow has generated, open the run.csv file in Excel using the method explained in step 2.

* To figure out which point you have to use as the output, you will need to use QGIS.

* Open QGIS and load the following files: "wflow_gauges.map" in the staticmaps folder and "Stations_Awash_river.shp" in the Export_Data folder. The gauge number closest to the measuring station can be found by clicking on the nearest point in the wflow_gauges.map layer with the "Identify Features" tool (blue circle, figure 8). The green arrow indicates the point closest to the measuring station (green dot); in the right red circle the number of the gauge can be found.

* This gauge number corresponds to a column in the run.csv file; below it the data is listed in time steps of one day (see figure 9).

* Analyse this data like you did in step 3 and compare it to the measured data (See figure 11).
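
As an alternative to Excel, the sketch below plots the simulated series for one gauge against the measured series with pandas. The location of run.csv, the gauge number and the measured-data file are assumptions to adapt to your own case.

```python
# Sketch: plotting a simulated gauge series from run.csv against the measured data.
# Assumptions: run.csv sits in the SBM output folder, its first column is the date
# index, gauge columns are named after their gauge number, and the measured data was
# saved as a CSV with 'date' and 'discharge' columns.
import pandas as pd
import matplotlib.pyplot as plt

simulated = pd.read_csv("MelkaKuntire/SBM/run.csv", index_col=0, parse_dates=True)
print(simulated.columns)  # check which gauge columns the file actually contains

measured = pd.read_csv("melka_kuntire.csv", index_col="date", parse_dates=True)

gauge = "7"  # hypothetical gauge number identified in QGIS (figure 8)

ax = simulated[gauge].plot(label="wflow simulation")
measured["discharge"].plot(ax=ax, label="measured")
ax.set_ylabel("discharge [m3/s]")
ax.legend()
plt.show()
```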

Step 7: Calibrating the Model

In figure 11 (step 6) you can see that the measured data and the output data from the wflow model do not follow the same graph: the wflow output shows a much higher runoff than is measured. You now have to start calibrating the model by changing different parameters. To do this, follow the steps below:

* Open run-hydrology-parameter-changes-MelkaKuntire.bat with Notepad ++

* You can now see the line of code that shows which folder wflow takes input data from (-c MelkaKuntire), the name of the folder that wflow creates for the output data (-r run_RootingDepth_1), the name of the .ini file that wflow uses as input (-c wflow_sbm.ini) and the parameters that you can change (-P "self.RootingDepth = self.RootingDepth * 1"). If you change parameters, do not forget to put "self." in front of the parameter you want to change. The parameter changed in this example is the RootingDepth (see figure 12).

Other parameters you can change can be found in the intbl and staticmaps folders. You can change multiple parameters at the same time (See figure 12).

* The parameters all have a different impact on the output; by changing different parameters you can see which have the most impact.

* In this example the RootingDepth and KsatHorfrac have the most impact. The factor change of the RootingDepth is approx. 2-3 and that of KsatHorfrac approx. 100-1000.

* After changing the parameters run the run-hydrology-parameter-changes-MelkaKuntire.bat again and compare the new results with the previous results.

* The goal is to get the simulated results as close to the measured data as possible.
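
To judge "as close as possible" with a number rather than by eye, you can compute the Nash-Sutcliffe efficiency between the measured and simulated series. The sketch below shows one way to do this with pandas; the series names and values are made up for illustration.

```python
# Sketch: Nash-Sutcliffe efficiency (NSE) as a single score for the fit between
# measured and simulated discharge; 1 is a perfect fit, 0 means the model does no
# better than the mean of the observations. The example values are made up.
import pandas as pd

def nash_sutcliffe(measured: pd.Series, simulated: pd.Series) -> float:
    paired = pd.concat([measured, simulated], axis=1, keys=["obs", "sim"]).dropna()
    residual = ((paired["obs"] - paired["sim"]) ** 2).sum()
    variance = ((paired["obs"] - paired["obs"].mean()) ** 2).sum()
    return 1.0 - residual / variance

obs = pd.Series([10.0, 12.0, 15.0, 11.0, 9.0])
sim = pd.Series([9.0, 13.0, 14.0, 12.0, 10.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```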

Step 8: Using the Model to Calculate Climate Change Impacts

Now that the model is calibrated, we can use it to calculate the impact of climate change. You can do this by changing the forcing data that is used. This data will be supplied by Deltares or other institutes. In this example a temperature rise of 1.4 degrees Celsius is used. To change the forcing data, follow these steps:

* Save the new forcing data in a new folder (See figure 13)

* Change the netcdfinput in the wflow_sbm.ini file to the new folder name and the file name of the new forcing data (See figure 14).

* Run run-hydrology-MelkaKuntire.bat and compare the new data with the previous data.
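
One way to quantify the climate change impact is to compare the new run with the calibrated baseline run. The sketch below assumes you copied the baseline SBM output to a separate folder (the "SBM_baseline" name and the gauge number are illustrative) before rerunning, and reports the relative change in mean discharge.

```python
# Sketch: relative change in mean simulated discharge between the calibrated baseline
# run and the climate-change run. Folder names and the gauge number are illustrative.
import pandas as pd

baseline = pd.read_csv("MelkaKuntire/SBM_baseline/run.csv", index_col=0, parse_dates=True)
scenario = pd.read_csv("MelkaKuntire/SBM/run.csv", index_col=0, parse_dates=True)

gauge = "7"  # hypothetical gauge number from step 6
change = 100.0 * (scenario[gauge].mean() - baseline[gauge].mean()) / baseline[gauge].mean()
print(f"Change in mean discharge at gauge {gauge}: {change:+.1f} %")
```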