Silico is a business simulation platform that allows you to build a digital twin of your enterprise to monitor your business. With this process model, you are able to forecast your business' future state, and simulate the impact of decisions virtually before implementing them in the real world.
Essential components of your digital twin of the enterprise are the representations of your business processes, such as lead-to-order, order-to-cash, or service processes, which you can build with our Business Process Simulation (BPS) methodology. These digital twins of your process help your business to forecast, for example, capacity requirements and match them to fluctuating demand.
We converted a simple process map into a Silico simulation model in a previous blog post. In this blog post, we will add the quantitative variables to the process twin that determine process flow and connect all aspects mathematically, allowing us to unlock the benefits of Business Process Simulation. These quantitative variables include the arrival rate of orders received per day, determinants of throughput rates like resources and processing times, and branching probabilities. Before we do so, we need to set our model up for simulation.
Setting up the Process Model
So far, we have recreated our process maps in Silico by converting the process map’s activities and tasks into flows, using symbols to branch our process, and by adding stocks. Before we add quantitative elements, we need to set up the simulation model by selecting a timestep and unit of time.
We can access our time options by clicking on “Project” in the top menu bar and then “Project Settings”. We want a model that starts on the 1st of January 2023 and looks five years into the future in daily increments. Therefore, we pick “Days” as our unit of time, “01/01/2023” as our start time, and 1825 as our number of steps.
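As a quick sanity check on the step count, here is a minimal sketch, assuming years of 365 days (leap days ignored); the variable names are ours, not Silico's:

```python
years = 5
days_per_year = 365  # simple approximation; leap days ignored
number_of_steps = years * days_per_year
print(number_of_steps)  # 1825
```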
Adding Units to your Process Model
Before building further, we will add units to our model to remind us what our flows and stocks represent. The stocks in our model reflect the number of orders that are backlogged or queuing at a task. Therefore, they all have the unit “Orders”. The flows represent how many orders are processed at each step or activity per unit of time. Because we have set the unit of time to “Days”, these flows represent the outcome of activities in “Orders per Day”. After updating the process map with units, it looks like this:
The units will help us ensure that our equations work as intended and that we convert between different time units where required.
We can now set the value of orders we will receive per day. To do so, we click on the flow named “Orders Received”. Let’s type 10 into its equation field for our first model iteration. That means we are receiving ten orders per day.
Silico process simulation models are highly extensible, and our digital twins can connect any element of your business. For example, instead of assigning a fixed value to the flow, we could determine the arrival rate through additional model structure: we could add a marketing model to our existing structure that creates new orders. Alternatively, if this part of our model reflects our order-to-cash process, we could add a quote-to-order or lead-to-order model upstream. The possibilities are endless.
Throughput Rates and their Determinants
Receiving ten orders per day, we can see that the first stock – “Reviewing Backlog” – accumulates linearly over time. Starting at zero, meaning no orders are waiting to be processed here, it increases by ten every day because ten new orders arrive at the process while no orders are yet being processed to drain the stock. We will now create the structure that determines the outflow.
Firstly, we will add a variable called “Max Review Rate” to our model. It reflects the maximum number of orders that could be reviewed per day. Together with the “Reviewing Backlog”, which the flow drains, it determines the “Orders Reviewed” flow. For the flow, we use an equation that selects the smaller of the stock and the maximum throughput rate. This ensures that the stock never drops below zero in cases where we allocate more employees to this activity or step than would be required to clear the backlog every day.
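The outflow logic can be sketched in a few lines of Python (a sketch with illustrative names, not Silico's actual equation syntax):

```python
def orders_reviewed(reviewing_backlog: float, max_review_rate: float) -> float:
    """The flow is capped by both the review capacity and the orders actually queuing."""
    return min(reviewing_backlog, max_review_rate)

print(orders_reviewed(10.0, 80.0))   # 10.0 — only 10 orders are waiting
print(orders_reviewed(200.0, 80.0))  # 80.0 — capacity is the binding constraint
```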
Secondly, we can work backwards and consider what determines the maximum throughput rate. A common way to calculate throughput rates is through the resources allocated to a task and the productivity of those resources. In our case, let’s assume that resources are employees that we allocate to reviewing orders. Therefore, we will create a variable called “Order Reviewers”, which we measure in FTEs. The productivity of those employees is reflected in the “Reviews per Day per Employee”. It represents how many orders each employee can process per day. For now, let’s assume we have 5 FTE allocated to reviewing, and each full-time employee can review 16 orders per day.
Thirdly, we can specify the productivity rate a bit better. How many orders one FTE can review per day may be an excellent way to think about this variable for modelling purposes. However, employees executing tasks, their managers, and the data collected through task mining may think about it differently. For example, we might specify the productivity with an additional variable called “Processing Time to Review” that reflects the hours of effort required to process a single order. Using this variable and another one called “Hours per Employee Day”, we can link the processing time in hours to the productivity per day. Here, we can see the strength of units as we must consider how an hourly variable is converted into a daily variable given the working hours of our full-time employees.
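The unit conversion can be made concrete with a small sketch. The 8-hour working day and the half-hour processing time are our assumptions, chosen so the result reproduces the 16 reviews per day per employee from the example:

```python
hours_per_employee_day = 8.0      # assumed working hours per FTE per day
processing_time_to_review = 0.5   # assumed hours of effort per order

# hours/day ÷ hours/order = orders/day — the units guide the equation
reviews_per_day_per_employee = hours_per_employee_day / processing_time_to_review
order_reviewers = 5               # FTEs, as in the example above
max_review_rate = order_reviewers * reviews_per_day_per_employee

print(reviews_per_day_per_employee)  # 16.0
print(max_review_rate)               # 80.0 orders per day
```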
With this final step, we have completed the structure determining the number of orders reviewed per day. With the throughput rate and its determinants added to the model, the first stock now has its outflow specified. It no longer accumulates linearly; instead, its value stabilises at 10.
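A minimal day-by-day sketch of this stock-and-flow structure shows the stabilisation (simple daily updates; the parameter values are the ones assumed above):

```python
stock = 0.0              # Reviewing Backlog, starts empty
arrival_rate = 10.0      # Orders Received per day
max_review_rate = 80.0   # 5 FTE × 16 reviews per day per employee

history = []
for day in range(10):
    outflow = min(stock, max_review_rate)  # Orders Reviewed, from the start-of-day stock
    stock += arrival_rate - outflow        # daily net change
    history.append(stock)

print(history)  # the backlog jumps to 10 on day one and stays there
```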
Branching Probabilities in your Process Model
However, “Orders Reviewed” does not yet feed the clean and unclean order flows. To achieve this, we need to allocate the reviewed orders to the two branches of the process flow. To calculate the clean orders, we multiply the percentage of clean orders, assumed to be 80%, by the number of reviewed orders. The percentage of unclean orders is the difference between 100% and the clean percentage; multiplying it by the number of reviewed orders gives the unclean orders identified during reviewing per day.
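In Python terms, the branching looks like this (a sketch; the 80% is the assumed branching probability from above):

```python
orders_reviewed = 10.0   # orders reviewed per day
pct_clean = 0.80         # assumed share of orders that are clean

clean_orders = pct_clean * orders_reviewed          # ≈ 8.0, flows towards delivery
unclean_orders = (1 - pct_clean) * orders_reviewed  # ≈ 2.0, flows towards cleaning

# The two branches always sum back to the reviewed orders
print(clean_orders, unclean_orders)
```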
With this addition, we can see that the numbers of clean and unclean orders in the “Delivery Backlog” and the “Cleaning Backlog” are now increasing linearly. While the inflows to the two backlogs are specified, their outflows are not yet calculated.
Completing the Process Model
We can now finalise the process model by adding another two sets of structures that drive the outstanding cleaning and delivering flows. We can reuse the same process simulation model structure developed for order reviewing: copy it, paste it, rename the variables appropriately, and set their corresponding values. We can even reuse our “Hours per Employee Day” variable, giving us this completed model:
You may notice a pattern in process models that always includes a set of core variables:
- The process structure that we have developed in the first part of this blog series
- The maximum throughput rates that could be achieved if there are sufficiently many cases backlogged and queuing
- The resources allocated to a step, such as FTEs
- The productivity rates that specify how many times a task can be executed per unit of time of the process model and per available resource
- The processing times that specify how long a task takes, for example, in hours
- Branching probabilities that split a process flow into exclusive branches of the process
- Converters that ensure consistency across different units of time, considering, for example, working hours per day
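Putting the pattern together, here is a sketch of the whole three-step model in Python. Apart from the arrival rate, the 80% branching probability, and the reviewing figures from the example, all parameter values are illustrative assumptions, and we assume cleaned orders re-join the delivery queue:

```python
def throughput(backlog, fte, hours_per_day, processing_time_hours):
    """Resources × productivity gives the max rate; min() keeps the backlog non-negative."""
    max_rate = fte * hours_per_day / processing_time_hours
    return min(backlog, max_rate)

arrival_rate = 10.0  # Orders Received per day
pct_clean = 0.8      # branching probability after reviewing

# Stocks (backlogs), all starting empty
reviewing_backlog = cleaning_backlog = delivery_backlog = 0.0

for day in range(30):
    # Flows, computed from the start-of-day stocks (cleaning/delivery staffing is assumed)
    reviewed = throughput(reviewing_backlog, fte=5, hours_per_day=8.0, processing_time_hours=0.5)
    cleaned = throughput(cleaning_backlog, fte=1, hours_per_day=8.0, processing_time_hours=2.0)
    delivered = throughput(delivery_backlog, fte=3, hours_per_day=8.0, processing_time_hours=1.0)

    # Stock updates; cleaned orders are assumed to re-join the delivery queue
    reviewing_backlog += arrival_rate - reviewed
    cleaning_backlog += (1 - pct_clean) * reviewed - cleaned
    delivery_backlog += pct_clean * reviewed + cleaned - delivered

# Each backlog settles at a steady value once flows balance
print(reviewing_backlog, cleaning_backlog, delivery_backlog)
```

Because each step has enough capacity, all three backlogs stabilise within a few simulated days instead of growing without bound.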
With this set of core variables, you can build most process structures. In our next blog post of this series, we will add business process outcomes to the model. They will measure how well our process performs along a few critical dimensions, including effort per unit and resulting unit costs, cases backlogged, the lead time of new cases, revenues, costs, and profit. These are the outcomes we will try to optimise as we simulate different scenarios and improve the process through transformation initiatives and operational changes.