Protocol Risk Framework
This document describes the risk framework of the Mars Protocol, which serves two main purposes.
1. To qualitatively assess the riskiness of protocols and assets to be added to the platform (including the Red Bank and Mars credit accounts).
2. To determine the risk parameters for assets to be added to Mars. Initially, this framework will cover two types of assets: single asset tokens and LP tokens.
The proposed framework enables different assets and LP tokens to be objectively compared and assessed by standardizing how risk is measured and how parameters are determined. The framework is explored in detail in the following sections: the first section covers the qualitative assessment process used to whitelist protocols and assets for the Red Bank or Mars credit accounts; the second section describes the quantitative process used to assess the riskiness of single assets; and the last section covers the methodology for determining the risk parameters of single assets and LP tokens.
The Mars Risk Framework is intended to be a non-binding resource to guide deliberation by Mars governance. The ultimate decision of whether to whitelist a protocol or asset rests solely in the hands of the Martian Council.
1. DeFi Protocol and Assets Whitelisting Process
The following criteria are suggested as an initial filter for whitelisting protocols and single assets within Mars. Assets or protocols that don’t meet the minimum requirements described below should not be incorporated into the protocol.
Integrating a protocol

Technical risk:

| Criterion | Minimum Requirement | Best Practice |
| --- | --- | --- |
| Time Since Launch | - | > 1 year |
| Custom Public Audit | At least 1 custom audit made public | Several custom audits performed by high-quality auditors, made public |
| Recent Audit | Audit within the last 12 months, or no code changes since the last audit | Audit within the last 12 months, or no code changes since the last audit |
| Quality of Smart Contracts | Well documented. Extensive test coverage. | Written with best practices. Very well documented. Extensive test coverage. Easy to read. Simple logic. Battle-tested code. |
| No Critical Vulnerabilities | No vulnerabilities have been exploited | No vulnerabilities have been exploited |
| Bug Bounty Program | Bug bounty program | Bug bounty program (min. 0.25% of TVL) |

Centralization risk:

| Criterion | Minimum Requirement | Best Practice |
| --- | --- | --- |
| Owner\* Decentralization | Multisig with a majority of the signers being reputable, accountable, not otherwise affiliated with one another, and incentive-aligned with relevant stakeholders | No owner; controlled by a DAO with safe processes in place |
| Admin\*\* Decentralization | Multisig with a majority of the signers being reputable, accountable, not otherwise affiliated with one another, and incentive-aligned with relevant stakeholders | No admin keys; controlled by a DAO with safe processes in place |
| Other permissioned addresses | Multisig with a majority of the signers being reputable, accountable, not otherwise affiliated with one another, and incentive-aligned with relevant stakeholders | No admin keys; controlled by a DAO with safe processes in place |

\* The owner account can call privileged functions.
\*\* The admin account can upgrade contracts.
Note: assets and protocols controlled by unaccountable, affiliated, and/or centralized entities should not be accepted into Mars.
Enabling assets as collateral and/or for other interactions
Oracle risk:

Oracle Risk: a robust oracle is required, understood as one that is:

- Costly to manipulate: the oracle should be costly to manipulate given the potential profit of the attack.
- Accurate: the price reported by the oracle should be as close as possible to the real spot price of the asset.
- Decentralized: the methodology for determining the price is well understood and transparent, and no single entity has control over the process and/or the outcome.
Note: given the oracle's critical importance for the platform, assets without robust oracles should not be accepted into Mars.
Technical risk:

| Criterion | Minimum Requirement | Best Practice |
| --- | --- | --- |
| Time Since Launch | - | > 1 year |
| Custom Public Audit | At least 1 custom audit made public | Several custom audits performed by high-quality auditors, made public |
| Recent Audit | Audit within the last 12 months, or no code changes since the last audit | Audit within the last 12 months, or no code changes since the last audit |
| Quality of Smart Contracts | Well documented. Extensive test coverage. | Written with best practices. Very well documented. Extensive test coverage. Easy to read. Simple logic. Battle-tested code. |
| No Critical Vulnerabilities | No vulnerabilities have been exploited | No vulnerabilities have been exploited |
| Bug Bounty Program | Bug bounty program | Bug bounty program (min. 0.25% of TVL) |

Centralization risk:

| Criterion | Minimum Requirement | Best Practice |
| --- | --- | --- |
| Owner\* Decentralization | Multisig with a majority of the signers being reputable, accountable, not otherwise affiliated with one another, and incentive-aligned with relevant stakeholders | No owner; controlled by a DAO with safe processes in place |
| Admin\*\* Decentralization | Multisig with a majority of the signers being reputable, accountable, not otherwise affiliated with one another, and incentive-aligned with relevant stakeholders | No admin keys; controlled by a DAO with safe processes in place |
| Other permissioned addresses | Multisig with a majority of the signers being reputable, accountable, not otherwise affiliated with one another, and incentive-aligned with relevant stakeholders | No admin keys; controlled by a DAO with safe processes in place |

\* The owner account can call privileged functions.
\*\* The admin account can upgrade contracts.
Note: for bridged assets, both the bridge itself and the token should pass the minimum requirements. For LP tokens, both the DEX and the token should pass the minimum requirements. This applies to the technical and centralization risk requirements.
2. Single Assets Scoring Methodology
For whitelisted assets, market and liquidity risks are assessed to calculate an asset’s score, which is then used as part of the process to determine the asset’s risk parameters. In this section, we will explore how that score is calculated and how it is used within the overall risk parameters methodology.
Risk Metrics
The scoring methodology evaluates assets in two broad categories: market risk and liquidity risk. Market risk is related to the volatility of the asset and extreme changes in its price, whereas liquidity risk is related to the ability to liquidate the asset. Specifically, assets are scored using the following metrics:
- Daily 95% Conditional Value-at-Risk (CVaR, 365-day): the average of the “extreme” losses in the tail of the distribution of asset returns beyond the value-at-risk (VaR) cutoff point, defined over the past 365 days.
- Maximum intraday drawdown (90-day): the maximum price change (from high to low) in a trading day over the last 90 days.
- Median 24hr volume (365-day, logarithm): the median 24hr volume over the last 365 days.
- Median 24hr market capitalization (7-day average, 90-day, logarithm): the median 7-day average 24hr market capitalization over the last 90 days.
- Average high-low percent quoted spread (30-day): a daily bid-ask spread proxy (see Appendix A for details).
- Amihud’s illiquidity measure (90-day, logarithm): a daily cost-per-dollar-volume proxy (see Appendix A for details).
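As an illustration, the two market-risk metrics above could be computed roughly as follows. This is a minimal sketch assuming daily close, high, and low prices as pandas Series; the function names and windowing details are illustrative, not the framework's reference implementation (the liquidity proxies are sketched in Appendix A).

```python
import numpy as np
import pandas as pd

def daily_cvar_95(close: pd.Series, window: int = 365) -> float:
    """Mean of the daily returns at or below the 5% quantile (95% CVaR)."""
    returns = close.pct_change().dropna().tail(window)
    var_cutoff = np.percentile(returns, 5)       # the 95% VaR cutoff point
    tail = returns[returns <= var_cutoff]        # "extreme" losses beyond VaR
    return float(tail.mean())

def max_intraday_drawdown(high: pd.Series, low: pd.Series, window: int = 90) -> float:
    """Largest high-to-low price change within a single trading day."""
    drawdown = (high - low) / high               # per-day high-to-low drop
    return float(drawdown.tail(window).max())
```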
The following subsections describe the methodology used to score tokens in each of these metrics.
Input Data
The input data for the scoring methodology corresponds to the daily historical data of asset prices, trading volume, and market capitalization over the past year (365 calendar days) from the reference date. In addition, daily high-low-open-close data is used over the past 30 calendar days, along with ±2% depth as of the reference date. All data is sourced from Coingecko, which was chosen for practical reasons; this should not imply an affiliation with or endorsement of the brand.
The selection of the assets for the scoring methodology is as follows:
1. The list of available assets is sourced from Coingecko.
2. The top 1,000 assets by market capitalization are selected.
3. All assets that do not have a trading history of at least 90 calendar days prior to the reference date are removed from the set.
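As a sketch, the selection above can be expressed as a simple filter. The record layout and field names (market_cap, history_days) are hypothetical stand-ins for data pulled from Coingecko beforehand:

```python
def select_universe(assets: list[dict], top_n: int = 1000, min_history_days: int = 90) -> list[dict]:
    """Keep the top-N assets by market cap that have enough trading history."""
    by_mcap = sorted(assets, key=lambda a: a["market_cap"], reverse=True)[:top_n]
    return [a for a in by_mcap if a["history_days"] >= min_history_days]
```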
Scoring Methodology
The scoring methodology consists of three steps:
1. A score is determined for each risk metric separately.
2. The scores are aggregated by simple averaging.
3. The asset’s final score is used to define the asset’s quality category.
Let’s explore each step below.
Step 1. Defining a score for an individual metric
Firstly, all metrics for each asset are transformed into a score from 0 to 100 to standardize the scoring methodology and compare different assets more efficiently. For this transformation (from metric to 0-100 score), a linear scaling method (min-max normalization) is used. The normalized values represent individual metric scores assigned to each asset. The detailed procedure is the following:
Let $x_{ij}$ denote the initial value of the $j$-th metric of the $i$-th asset, and let $\tilde{x}_{ij}$ be the normalized value of $x_{ij}$; $x_j^{\min}$ and $x_j^{\max}$ are the minimum and the maximum values of the $j$-th metric over all assets.

For positive metrics (i.e., those that increase with improving asset quality), the normalization is performed as follows:

$$\tilde{x}_{ij} = 100 \cdot \frac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}}$$

whereas if a metric increase has a negative effect on the final rating, the formula becomes

$$\tilde{x}_{ij} = 100 \cdot \frac{x_j^{\max} - x_{ij}}{x_j^{\max} - x_j^{\min}}$$

Using the above formulas, each $i$-th asset is described by a vector of normalized parameters

$$\tilde{x}_i = \left(\tilde{x}_{i1}, \tilde{x}_{i2}, \ldots, \tilde{x}_{im}\right)$$
As a result of this step, the values of all metrics are brought to the same scale in such a way that all metrics positively impact the asset quality (the closer the score to 100 the better the asset quality in regards to that metric).
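A minimal sketch of this min-max normalization, assuming the raw values of one metric across all assets are held in a NumPy array; the positive flag marks whether larger raw values mean better quality:

```python
import numpy as np

def normalize_metric(values: np.ndarray, positive: bool) -> np.ndarray:
    """Min-max scale one metric over all assets to a 0-100 score."""
    lo, hi = values.min(), values.max()
    scaled = (values - lo) / (hi - lo)
    if not positive:                             # a metric increase hurts quality
        scaled = 1.0 - scaled
    return 100.0 * scaled
```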
Step 2. Aggregation of metrics
Once scores for all metrics are calculated, they are averaged to get the asset’s final score.
Example: let asset X have the following scores across the six metrics:

| Asset | Metric 1 | Metric 2 | Metric 3 | Metric 4 | Metric 5 | Metric 6 |
| --- | --- | --- | --- | --- | --- | --- |
| X | 90 | 82 | 47 | 60 | 70 | 80 |

Then the total score is:

$$\text{Score}_X = \frac{90 + 82 + 47 + 60 + 70 + 80}{6} = 71.5$$
Step 3. Binning
The range of final score values is divided into sub-ranges to provide five quality categories: “very good”; “good”; “medium”; “bad”; “very bad”. The following binning procedure is applied to define the category intervals.
The upper expectation (ceiling) for the final score is set at 80, and the lower expectation (floor) is the 10th percentile of the distribution of score values. Assets with a final score above the ceiling are assigned the category “very good,” while those with a final score below the floor are assigned the category “very bad.”
The intervals between the ceiling and the floor are defined based on an equal-width binning procedure. The width of each interval is:

$$h = \frac{S_{\text{ceil}} - S_{\text{floor}}}{n}$$

where $S_{\text{ceil}} = 80$ is the ceiling, $S_{\text{floor}}$ is the floor, and $n = 3$ is the number of intermediate categories (“good”, “medium”, “bad”). The corresponding interval boundaries are:

$$S_{\text{floor}},\; S_{\text{floor}} + h,\; S_{\text{floor}} + 2h,\; S_{\text{ceil}}$$
Each asset is assigned a quality category depending on the interval in which the value of the final score falls, according to the following table:
| Category | Final score |
| --- | --- |
| Very good | [80; 100] |
| Good | [68; 80] |
| Medium | [56; 68] |
| Bad | [43; 56] |
| Very bad | [0; 43] |
Table 1. Bins for quality categories
These categories are used to define the maximum allowable LTVs within each category and risk horizons for the LTV calculation methodology (see Section 3).
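A hypothetical mapping from a final score to a quality category, hard-coding the interval boundaries from Table 1 (in practice the floor would be recomputed as the 10th percentile of the score distribution):

```python
def quality_category(score: float) -> str:
    """Map a 0-100 final score to a quality category per Table 1."""
    if score >= 80:
        return "very good"
    if score >= 68:
        return "good"
    if score >= 56:
        return "medium"
    if score >= 43:
        return "bad"
    return "very bad"
```

For the worked example in Step 2, quality_category(71.5) returns "good".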
The procedure for determining the final score is similar for new assets (those not included in the initial data sample). First, all metrics are calculated from historical data and normalized using the minimum and maximum values previously defined from the sample (see Table 2 below). If a metric value exceeds the maximum or falls below the minimum, it is set equal to the maximum or minimum value, respectively. Finally, the obtained scores are averaged to get the total score.
| Metric | Min | Max |
| --- | --- | --- |
| Daily CVaR (95%), % | -0.3 | 24.4 |
| Median intraday drawdown | 0.3 | 47.9 |
| Median 7-day average market cap, log $ | 10.3 | 26.7 |
| Mean high-low percentage quoted spread, % | 0.1 | 9.2 |
| Amihud's illiquidity, log | 0.1 | 31.3 |
Table 2. Min and max values used for metrics normalization
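A sketch of this new-asset scoring step, clipping a metric to the calibration range from Table 2 before normalizing; the positive flag plays the same role as in the normalization sketch above:

```python
import numpy as np

def score_new_asset_metric(value: float, lo: float, hi: float, positive: bool) -> float:
    """Score one metric of a new asset against the sample min/max from Table 2."""
    clipped = float(np.clip(value, lo, hi))      # out-of-range values snap to the bounds
    scaled = (clipped - lo) / (hi - lo)
    return 100.0 * (scaled if positive else 1.0 - scaled)
```

For instance, a daily 95% CVaR of 30% would be clipped to the 24.4 maximum and, being a metric whose increase hurts quality, scored 0.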
3. Risk Parameters
This section describes the approach to determine the risk parameters for whitelisted assets and LP tokens, namely Liquidation LTV, Maximum LTV, and Margin of Safety.
3.1 Definitions
Liquidation LTV is the LTV at which a position is defined as undercollateralized and the user becomes liquidatable.
Maximum LTV (≤ Liquidation LTV) is the maximum LTV allowed when the user is opening or adjusting a position.
Margin of Safety is the difference between the Liquidation LTV and the Maximum LTV. It’s a safety cushion to prevent users from becoming liquidatable immediately after opening a position at the maximum allowed LTV.
The derivation of Liquidation LTV is described in Section 3.2. The approach to define Margin of Safety and corresponding Maximum LTV is provided in Section 3.3. Lastly, in Section 3.4 the methodology to determine these risk parameters for LP tokens is explored.
3.2 Liquidation LTV for Single Asset Tokens
Liquidation LTV is defined for each token as follows:

$$\text{Liquidation LTV} = \min\left(1 - \text{Haircut},\ \text{LTV}_{\text{cap}}\right)$$

where

$$\text{Haircut} = \text{Market risk component} + \text{Liquidity risk component}$$
The haircut intends to capture the percentage potential loss of value of a given asset after it becomes liquidatable due to the following factors:
- Market risk component: the risk related to possible extreme price movements.
- Liquidity risk component: the cost of liquidating a position following a liquidation event, due to the impact of the liquidation on the market price.
A detailed description of the calculation approach for both components is given in the following subsections. The greater the asset's market and/or liquidity risk, the higher the corresponding haircut and the lower the LTV.
The $\text{LTV}_{\text{cap}}$ serves to limit the LTV obtained from the historical data to ensure conservatism and depends on the token’s quality as defined in Section 2 (see Table 3 below).
Market Risk Component
The market risk component is an LTV adjustment defined such that the expected likelihood of the price of the asset dropping more than the market risk component is 1%. Hence the market risk component is defined as the 99% conditional value-at-risk over a given risk horizon $h$ (the average of the “extreme” returns in the left tail of the asset returns distribution, beyond the value-at-risk cutoff point):

$$\text{Market risk component} = \left|\,\text{CVaR}_{99\%,\,h}\,\right|$$

where $\text{CVaR}_{99\%,\,h}$ is defined by using the historical-simulation approach, which implies that the probability distribution and corresponding tail-risk are estimated empirically from the observed price movements (percentage asset returns) over the past 365 days from the reference date.
It’s important to highlight that this methodology only provides an estimate of extreme future price trajectories of a given asset. This estimate is based on past price performance and, as such, it should be understood as a backward-looking, fallible (though valuable) tool for predicting future prices.
The risk horizon represents the longest period required for a position to be liquidated. It is assumed to depend on the asset's quality category (see Table 3 below) and varies from 1 to 5 days. The riskier the asset, the longer the period used to calculate relative price movements and quantify the corresponding tail-risk (CVaR). Usually, liquidations within DeFi happen within 1 day. However, a minimum 1-day horizon is used to ensure conservatism, as the historical simulation approach is backward-looking and may not cover all potential movements of the asset price in the future.
Note: estimating the distribution quantile can be inaccurate for assets with short histories. To take this into account, we use quantiles for those assets with price histories covering the past 200-365 days, while for others (90-200 days of history) we use the extreme-move approach: the maximum observed percentage return over the available historical period.
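A sketch of the market risk component under the historical-simulation approach described above. The use of overlapping h-day returns and the 200-observation threshold for the extreme-move fallback are assumptions made for illustration:

```python
import numpy as np
import pandas as pd

def market_risk_component(close: pd.Series, horizon_days: int) -> float:
    """99% CVaR of h-day percentage returns, estimated by historical simulation."""
    returns = close.pct_change(periods=horizon_days).dropna().tail(365)
    if len(returns) >= 200:                      # enough history: average the far tail
        var_cutoff = np.percentile(returns, 1)   # 99% VaR cutoff (left tail)
        tail = returns[returns <= var_cutoff]
        return abs(float(tail.mean()))
    return abs(float(returns.min()))             # extreme-move fallback for short histories
```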
Liquidity Risk Component
The liquidity risk adjustment is defined per asset using the −2% dollar depth aggregated over different exchanges as of the reference date. It is a cost-per-dollar-volume liquidity risk measure, interpreted as the capital in USD required to move the price down 2% from the last traded price. The list of exchanges considered includes Uniswap, Osmosis, and the top-10 exchanges according to the Coingecko ranking.
The liquidity risk component is calculated for each token as follows:

$$\text{Liquidity risk component} = \text{Swap Size} \times \text{Multiplier}$$

Here the multiplier

$$\text{Multiplier} = \frac{2\%}{\text{2\% Depth}}$$

shows by how many percent the price moves down per $1 of sell volume.
Depth refers to the ability of the market to absorb the sale or exit of a position. A liquidator who liquidates the position of an ordinary user is unlikely to impact the asset price; selling a large block of assets, though, can cause the price to fall. Hence, the Swap Size is set to depend on the asset's deposit cap and is defined conservatively as 1% of the deposit cap, a medium-size transaction amount that can have a notable impact on the asset price.
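A minimal sketch of the liquidity risk component, assuming the aggregated −2% depth and the deposit cap are both given in USD:

```python
def liquidity_risk_component(deposit_cap_usd: float, depth_minus_2pct_usd: float) -> float:
    """Price-impact haircut for liquidating 1% of the deposit cap."""
    swap_size = 0.01 * deposit_cap_usd           # assumed medium-size liquidation amount
    multiplier = 0.02 / depth_minus_2pct_usd     # fractional price move per $1 of sell volume
    return swap_size * multiplier
```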
The risk horizon and $\text{LTV}_{\text{cap}}$ for each token’s quality category are provided in the table below:
Table 3. Risk horizons and LTV caps per asset’s quality category
3.3 Maximum LTV and Margin of Safety for Single Asset Tokens
The Liquidation LTV is defined based on the $\text{CVaR}_{99\%}$ with the horizon $h$ determined by the token’s quality category. The safety margin is defined as the absolute difference between the CVaR calculated at the defined horizon plus 1 day and the $\text{CVaR}_{99\%}$ at the defined horizon:

$$\text{Margin of Safety} = \left|\,\text{CVaR}_{99\%,\,h+1} - \text{CVaR}_{99\%,\,h}\,\right|$$
Thus, the margin of safety is defined by the relative price drop incurred in one additional day in relation to the horizon used to determine the Liquidation LTV.
The higher the price volatility of the token, the more the price can fall with an increase in the risk horizon by 1 day, and therefore the higher the safety margin and the lower the corresponding Maximum LTV will be.
The following caps are applied to the margin of safety derived from the historical data depending on the token’s quality category:
Table 4. Margin of Safety caps per token’s quality category
The minimum margin of safety is set to be 0.005.
The Maximum LTV is defined based on an absolute adjustment applied to the Liquidation LTV as follows:

$$\text{Maximum LTV} = \text{Liquidation LTV} - \text{Margin of Safety}$$
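Putting Sections 3.2 and 3.3 together, a sketch under the formulas as reconstructed above (haircut as the sum of the two components; margin of safety capped per the token's quality category and floored at 0.005):

```python
def liquidation_ltv(market_rc: float, liquidity_rc: float, ltv_cap: float) -> float:
    """Liquidation LTV = min(1 - haircut, LTV cap)."""
    haircut = market_rc + liquidity_rc
    return min(1.0 - haircut, ltv_cap)

def maximum_ltv(liq_ltv: float, cvar_h: float, cvar_h_plus_1: float,
                margin_cap: float, margin_floor: float = 0.005) -> float:
    """Maximum LTV = Liquidation LTV - Margin of Safety."""
    margin = abs(cvar_h_plus_1 - cvar_h)         # extra tail-risk drop from one more day
    margin = min(max(margin, margin_floor), margin_cap)
    return liq_ltv - margin
```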
3.4 Risk Parameters for LP tokens
The Liquidation LTV of an LP token is calculated as the average of the Liquidation LTVs of the individual assets that compose the LP token, adjusted for IL risks:

$$\text{Liquidation LTV}_{\text{LP}} = \frac{\text{Liquidation LTV}_1 + \text{Liquidation LTV}_2}{2} - \text{IL Risk Adjustment}$$

where $\text{Liquidation LTV}_1$ and $\text{Liquidation LTV}_2$ are the Liquidation LTVs of the assets that compose the LP token (assuming a 50/50 LP token). The IL Risk Adjustment is intended to capture the additional impermanent loss risks associated with holding an LP token (vs. a 50/50 portfolio of the individual assets).
The IL risk adjustment is defined as follows:

$$\text{IL Risk Adjustment} = \left|\,\text{Expected IL}\,\right|$$

where Expected IL is the expected impermanent loss over the risk horizon, estimated as described below.
Assuming a 50%/50% LP token and a constant product AMM, IL is calculated as follows:

$$\text{IL} = \frac{2\sqrt{k}}{1+k} - 1, \qquad k = \frac{1+R_1}{1+R_2}$$

where $R_1$ and $R_2$ are the assets’ simple returns calculated over the unit of time, and $k$ is the relative change in the pool price $p$ (the price of one pool asset in terms of the other). Note that the above IL formula excludes trading fees and other rewards LP token holders may accrue. This is intentional and is done to calculate a worst-case IL, which ultimately generates a more conservative LTV for the LP token.
Historical ILs for each pool are then calculated from the above formula using 10-day overlapping asset returns over the past 365 days.
Then the expected IL is estimated using the historical simulation value-at-risk approach. Based on the empirical distribution of ILs, value-at-risk measures the loss value that will not be exceeded with a probability of 95% over a 10-day risk horizon:

$$\text{Expected IL} = \text{VaR}_{95\%}\left(\text{IL}_{10\text{-day}}\right)$$
The risk horizon represents the longest period needed for liquidation (the longest period over which the protocol can be exposed to the position while liquidating the LP token). It is set to 10 days to ensure conservatism.
Note: estimating the distribution quantile can be inaccurate for assets with short histories. To take this into account, we use quantiles for those pairs with price histories covering the past 200-365 days, while for others (90-200 days of history) we use the extreme-move approach: the maximum observed IL over the available historical period.
The applied IL risk adjustment captures the size of the loss (risk exposure) within the chosen risk horizon at the associated probability level, i.e., the magnitude of loss with a 5% chance of being exceeded. Using a historical simulation method to estimate value-at-risk captures any empirical dependencies (including non-linear ones) between the two assets in the pool.
The Margin of Safety and the Maximum LTV for an LP token are defined as follows:

$$\text{Margin of Safety}_{\text{LP}} = \frac{\text{Margin of Safety}_1 + \text{Margin of Safety}_2}{2}$$

$$\text{Maximum LTV}_{\text{LP}} = \text{Liquidation LTV}_{\text{LP}} - \text{Margin of Safety}_{\text{LP}}$$

where $\text{Margin of Safety}_1$ and $\text{Margin of Safety}_2$ are the Margins of Safety of the assets that compose the LP token.
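A sketch of the LP-token parameters, assuming two aligned series of 10-day overlapping simple returns for the pool assets; the constant-product IL formula follows Section 3.4 as written above:

```python
import numpy as np
import pandas as pd

def impermanent_loss(k: np.ndarray) -> np.ndarray:
    """IL of a 50/50 constant-product position vs. holding, for price ratio k."""
    return 2.0 * np.sqrt(k) / (1.0 + k) - 1.0

def il_risk_adjustment(r1: pd.Series, r2: pd.Series) -> float:
    """95% VaR of historical ILs built from 10-day overlapping returns."""
    k = (1.0 + r1) / (1.0 + r2)                  # relative price change of the pair
    ils = impermanent_loss(k.to_numpy())
    return abs(float(np.percentile(ils, 5)))     # loss not exceeded with 95% probability

def lp_risk_parameters(liq_ltv_1: float, liq_ltv_2: float,
                       mos_1: float, mos_2: float, il_adj: float) -> tuple[float, float]:
    """Return (Liquidation LTV, Maximum LTV) for a 50/50 LP token."""
    liq_ltv_lp = 0.5 * (liq_ltv_1 + liq_ltv_2) - il_adj
    mos_lp = 0.5 * (mos_1 + mos_2)
    return liq_ltv_lp, liq_ltv_lp - mos_lp
```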
4. Model Usage, Monitoring and Update
Approaches used in this framework to define risk parameters are primarily data-driven; hence the obtained risk parameters are expected to change in response to varying market conditions. Therefore, obtained LTVs can be revised regularly (e.g., once every six months, or urgently in case of severe market stress) to avoid stale risk parameters.
When updating the model, the historical data period can also be revised to ensure that the sample covers at least one period of acute stress (e.g., May 2022).
Additionally, the sanity of the final outputs of the model should be checked by the community such that final parameter adjustments are made when considered necessary.
5. Disclaimers
Please Note: The proposed risk framework is not intended as financial advice and should not be relied upon; rather, it is being made available for educational purposes as a starting point for each person’s own independent review and analysis of risks. The risk framework could fail to account for certain risks or could fail to adequately weigh the risks that it does account for. Use at your own risk. No representation, warranty, guaranty, indemnity or assumption of risk is being provided hereby. This article is subject to and qualified by the Mars Disclaimers/Disclosures.
Appendix A. Description of risk metrics
Amihud’s illiquidity measure
The metric is calculated as follows:

$$\text{ILLIQ} = \frac{1}{N} \sum_{t=1}^{N} \frac{\lvert r_t \rvert}{V_t}$$

where $r_t$ is the daily asset return at day $t$ and $V_t$ is the daily dollar trading volume at $t$.
In cases where significant price changes occur on small trading volume, this ratio increases, meaning the illiquidity of the asset increases. Everything else being equal, a higher trading volume leads to a lower Amihud illiquidity measure.
This metric is considered a proxy for market depth, a liquidity measure.
High-low percentage quoted spread
The metric is calculated as follows:

$$\text{Spread}_t = \frac{h_t - l_t}{m_t}, \qquad m_t = \frac{h_t + l_t}{2}$$

where $h_t$ is the highest daily price, $l_t$ is the lowest daily price, and $m_t$ is the mid-price. The full spread represents the cost of buying and selling the asset today. As we are only interested in the cost of selling, the cost of liquidity is only half of the spread.
This metric represents a proxy for the bid-ask spread, widely used as a market width measure.
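A sketch of both liquidity proxies, assuming daily close, high, and low prices and dollar trading volume as aligned pandas Series:

```python
import pandas as pd

def amihud_illiquidity(close: pd.Series, dollar_volume: pd.Series, window: int = 90) -> float:
    """Average |daily return| per dollar of trading volume (Amihud proxy)."""
    returns = close.pct_change().dropna().tail(window)
    volume = dollar_volume.reindex(returns.index)
    return float((returns.abs() / volume).mean())

def high_low_quoted_spread(high: pd.Series, low: pd.Series, window: int = 30) -> float:
    """Average daily (high - low) / mid-price, a bid-ask spread proxy."""
    mid = (high + low) / 2.0
    return float(((high - low) / mid).tail(window).mean())
```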