In the past we always worked with just one random variable at a time. Unfortunately, that is not always what we want to do. Although "unfortunately" doesn't quite fit here, since it actually becomes more interesting when we observe two or more random variables at the same time, it was what I first thought. I was worried that it would become overwhelmingly complicated with two or more variables, but that's not the case. Sure, it becomes more complicated, but also more interesting, and after a while the complicated things don't seem complicated anymore. The key is to try to understand it and not just memorise it. I encourage all readers not to rely only on my blog posts but also to read texts about the same topic from other sources. It will help to deepen your understanding.

Given that short – or not so short – introduction, let us talk about what we will do today. The topic for today is joint distributions.

## Joint Distributions

**What are joint distributions?** Joint distributions are, essentially, distributions of two or more random variables, like the height and weight of an animal. Joint distributions allow us to compute probabilities involving two or more random variables, and they help us to understand the relationship between the random variables. This all works fine if the random variables are independent. If they are not, we use covariance and correlation to get a picture of the relationship between the random variables. Covariance and correlation will be the topic of the next Introduction blog post.

### Discrete case

If the random variables are discrete, the joint distribution has a joint probability mass function (joint PMF). The joint PMF gives the probability of the joint outcome: $p(x_i, y_j) = P(X = x_i, Y = y_j)$.

The joint PMF can be visualised with a joint probability table:

| X \ Y | $y_1$ | $y_2$ | $\cdots$ | $y_n$ |
|-------|-------|-------|----------|-------|
| $x_1$ | $p(x_1, y_1)$ | $p(x_1, y_2)$ | $\cdots$ | $p(x_1, y_n)$ |
| $x_2$ | $p(x_2, y_1)$ | $p(x_2, y_2)$ | $\cdots$ | $p(x_2, y_n)$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\ddots$ | $\vdots$ |
| $x_m$ | $p(x_m, y_1)$ | $p(x_m, y_2)$ | $\cdots$ | $p(x_m, y_n)$ |

**Example:** Suppose we roll two fair dice. Let X be the outcome of the first die and Y the outcome of the second die. Every pair of outcomes is equally likely, so each cell of the joint probability table is $\frac{1}{36}$:

| X \ Y | 1 | 2 | 3 | 4 | 5 | 6 |
|-------|---|---|---|---|---|---|
| 1 | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ |
| 2 | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ |
| 3 | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ |
| 4 | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ |
| 5 | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ |
| 6 | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ | $\frac{1}{36}$ |
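The joint probability table above can be sketched in code. This is a minimal, hypothetical example that stores the joint PMF in a plain dictionary (the names `joint_pmf`, `x`, `y` are my own, not from any library):

```python
from fractions import Fraction

# Joint PMF of two fair dice: every pair (x, y) has probability 1/36.
joint_pmf = {(x, y): Fraction(1, 36) for x in range(1, 7) for y in range(1, 7)}

# Probability of a single joint outcome, e.g. P(X = 2, Y = 5):
print(joint_pmf[(2, 5)])  # -> 1/36

# All entries of a joint PMF sum to 1:
print(sum(joint_pmf.values()))  # -> 1
```

Using `Fraction` keeps the probabilities exact instead of introducing floating-point rounding.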

**Properties of joint PMFs:**

- $p(x_i, y_j) \geq 0$ for all $i, j$
- $\sum_i \sum_j p(x_i, y_j) = 1$

### Continuous case

One might expect that the continuous case is more difficult, but it isn't. It is basically the same: we just change PMFs to PDFs, discrete sets to ranges, and sums to integrals. The joint probability density function (joint PDF) $f(x, y)$ gives the probability density at $(x, y)$; probabilities are obtained by integrating it, e.g. $P(a < X < b,\ c < Y < d) = \int_c^d \int_a^b f(x, y)\,dx\,dy$.

The joint distribution of continuous random variables can be visualised by a big rectangle containing all values $X$ and $Y$ can take, and a small rectangle inside it representing an event, like $2 < X < 3,\ 1 < Y < 2$.

**Properties of joint PDFs:**

- $f(x, y) \geq 0$ for all $x, y$
- $\int_c^d \int_a^b f(x, y)\,dx\,dy = 1$, where $[a, b]$ and $[c, d]$ are the ranges of $X$ and $Y$
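To make the rectangle picture concrete, here is a small sketch under an assumed setup: $X$ and $Y$ uniform on the big rectangle $[0, 5] \times [0, 4]$, so the joint PDF is the constant $\frac{1}{20}$ inside it. The event probability is approximated with a simple Riemann sum (the function and grid size are my own choices for illustration):

```python
# Assumed joint PDF: uniform on [0, 5] x [0, 4], i.e. f(x, y) = 1/20 inside.
def f(x, y):
    return 1 / 20 if 0 <= x <= 5 and 0 <= y <= 4 else 0.0

# Approximate P(2 < X < 3, 1 < Y < 2) with a Riemann sum over a fine grid.
n = 200
dx = dy = 1 / n
prob = sum(
    f(2 + (i + 0.5) * dx, 1 + (j + 0.5) * dy) * dx * dy
    for i in range(n)
    for j in range(n)
)
print(round(prob, 4))  # -> 0.05
```

Since the density is constant, the answer is just (area of the small rectangle) × $\frac{1}{20}$ = $\frac{1}{20}$ = 0.05, which the numerical integral reproduces.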

## Joint Cumulative Distribution Function (CDF)

The joint CDF is given by $F(x, y) = P(X \leq x, Y \leq y)$ and calculated by either double sums or double integrals:

**Continuous case:** $F(x, y) = \int_c^y \int_a^x f(u, v)\,du\,dv$, where $a$ and $c$ are the lower ends of the ranges of $X$ and $Y$

**Discrete case:** $F(x, y) = \sum_{x_i \leq x} \sum_{y_j \leq y} p(x_i, y_j)$
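The discrete double sum can be sketched directly on the two-dice joint PMF from above (again a hypothetical helper, not a library function):

```python
from fractions import Fraction

# Joint PMF of two fair dice, as in the joint probability table above.
joint_pmf = {(x, y): Fraction(1, 36) for x in range(1, 7) for y in range(1, 7)}

def joint_cdf(x, y):
    """F(x, y) = P(X <= x, Y <= y): a double sum over the joint PMF."""
    return sum(p for (xi, yj), p in joint_pmf.items() if xi <= x and yj <= y)

print(joint_cdf(2, 3))  # -> 1/6  (6 of the 36 equally likely pairs)
print(joint_cdf(6, 6))  # -> 1    (the whole sample space)
```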

## Marginal Distribution

Sometimes we want to consider just one of the random variables. We then use the marginal distribution. The PMF for $X$ alone, without $Y$, is the marginal PMF of $X$, often denoted $p_X(x_i)$. To get the marginal PMF we just sum up every row or every column of the joint probability table: $p_X(x_i) = \sum_j p(x_i, y_j)$. In the continuous case we integrate the joint PDF over the complete ranges of all the other random variables:

- $f_X(x) = \int_c^d f(x, y)\,dy$ if $[c, d]$ is the range of $Y$
- $f_Y(y) = \int_a^b f(x, y)\,dx$ if $[a, b]$ is the range of $X$
- $F_X(x) = F(x, d)$ if $d$ is the upper end of the range of $Y$
- $F_Y(y) = F(b, y)$ if $b$ is the upper end of the range of $X$

## Independence

As already mentioned in the introduction, we use covariance and correlation when the random variables are not independent. But how do we check if the random variables are independent? That's surprisingly easy. Jointly distributed random variables are independent if the joint distribution factors into the product of the marginals: $F(x, y) = F_X(x)\,F_Y(y)$ for all $x, y$; equivalently, $p(x_i, y_j) = p_X(x_i)\,p_Y(y_j)$ in the discrete case and $f(x, y) = f_X(x)\,f_Y(y)$ in the continuous case.
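The factorisation check can be sketched for the two-dice example: verify $p(x, y) = p_X(x)\,p_Y(y)$ for every cell of the joint table (the helper names are my own):

```python
from fractions import Fraction

# Joint PMF of two fair dice, as in the joint probability table above.
joint_pmf = {(x, y): Fraction(1, 36) for x in range(1, 7) for y in range(1, 7)}

def marginal(joint, axis, value):
    # axis 0 -> marginal PMF of X, axis 1 -> marginal PMF of Y.
    return sum(p for pair, p in joint.items() if pair[axis] == value)

# X and Y are independent iff p(x, y) = p_X(x) * p_Y(y) for every pair.
independent = all(
    joint_pmf[(x, y)] == marginal(joint_pmf, 0, x) * marginal(joint_pmf, 1, y)
    for x in range(1, 7)
    for y in range(1, 7)
)
print(independent)  # -> True
```

Here every cell is $\frac{1}{36} = \frac{1}{6} \cdot \frac{1}{6}$, so the two dice are independent, matching intuition.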