Determining the characteristics of a random function from experience. Random Functions


A complex random function is a function of the form

Z(t) = X(t) + iY(t),

where X(t) and Y(t) are real random functions of the real argument t.

Let us generalize the definitions of mathematical expectation and variance to complex random functions so that, in particular, at Y = 0 these characteristics coincide with those introduced earlier for real random functions, i.e., so that the following requirements hold:

m_z(t) = m_x(t), (*)

D_z(t) = D_x(t). (**)

The mathematical expectation of the complex random function Z(t) = X(t) + iY(t) is the (non-random) complex function

m_z(t) = m_x(t) + i m_y(t).

In particular, for Y = 0 we get m_z(t) = m_x(t), i.e., requirement (*) is satisfied.

The variance of the complex random function Z(t) is the mathematical expectation of the squared modulus of the centered function Z(t) − m_z(t):

D_z(t) = M[|Z(t) − m_z(t)|²].

In particular, for Y = 0 we obtain D_z(t) = M[|X(t) − m_x(t)|²] = D_x(t), i.e., requirement (**) is satisfied.

Considering that the mathematical expectation of a sum is equal to the sum of the mathematical expectations of the terms, we have

D_z(t) = M[|Z(t) − m_z(t)|²] = M[(X(t) − m_x(t))² + (Y(t) − m_y(t))²] = M[(X(t) − m_x(t))²] + M[(Y(t) − m_y(t))²] = D_x(t) + D_y(t).

So the variance of a complex random function is equal to the sum of the variances of its real and imaginary parts:

D_z(t) = D_x(t) + D_y(t).
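As a quick numerical illustration of this identity, here is a minimal Monte Carlo sketch (the particular choice X(t) = U cos t, Y(t) = V sin t with normal U, V is an illustrative assumption, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                 # number of simulated experiments
t = 1.3                     # a fixed section of the random function

# Illustrative parts: X(t) = U*cos(t), Y(t) = V*sin(t), U and V normal
U = rng.normal(0.0, 2.0, n)
V = rng.normal(1.0, 3.0, n)
Z = U * np.cos(t) + 1j * V * np.sin(t)

Dz = np.mean(np.abs(Z - Z.mean()) ** 2)          # M[|Z - m_z|^2]
Dx_plus_Dy = (U * np.cos(t)).var() + (V * np.sin(t)).var()
print(Dz, Dx_plus_Dy)       # the two numbers agree to sampling error
```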

It is known that the correlation function of a real random function X(t) at equal values of the arguments is equal to the variance D_x(t). Let us generalize the definition of the correlation function to complex random functions Z(t) so that at equal values of the arguments t₁ = t₂ = t the correlation function K_z(t, t) equals the variance D_z(t), i.e., so that the requirement

K_z(t, t) = D_z(t) (***)

is met.

The correlation function of the complex random function Z(t) is the correlation moment of the sections Z(t₁) and Z(t₂):

K_z(t₁, t₂) = M[(Z(t₁) − m_z(t₁))(Z(t₂) − m_z(t₂))*],

where the asterisk denotes complex conjugation. In particular, at equal values of the arguments

K_z(t, t) = M[(Z(t) − m_z(t))(Z(t) − m_z(t))*] = M[|Z(t) − m_z(t)|²] = D_z(t),

i.e., requirement (***) is satisfied.

If the real random functions X(t) and Y(t) are correlated, then

K_z(t₁, t₂) = K_x(t₁, t₂) + K_y(t₁, t₂) + i[R_xy(t₂, t₁) − R_xy(t₁, t₂)],

where R_xy is the cross-correlation function of X(t) and Y(t).

If X(t) and Y(t) are not correlated, then

K_z(t₁, t₂) = K_x(t₁, t₂) + K_y(t₁, t₂).

Let us generalize the definition of the cross-correlation function to complex random functions Z₁(t) = X₁(t) + iY₁(t) and Z₂(t) = X₂(t) + iY₂(t) so that, in particular, at Y₁ = Y₂ = 0 the requirement

R_z1z2(t₁, t₂) = R_x1x2(t₁, t₂) (****)

is fulfilled.

The cross-correlation function of two complex random functions is the (non-random) function

R_z1z2(t₁, t₂) = M[(Z₁(t₁) − m_z1(t₁))(Z₂(t₂) − m_z2(t₂))*].

In particular, at Y₁ = Y₂ = 0 we get R_z1z2(t₁, t₂) = M[(X₁(t₁) − m_x1(t₁))(X₂(t₂) − m_x2(t₂))] = R_x1x2(t₁, t₂), i.e., requirement (****) is met.

The cross-correlation function of two complex random functions is expressed through the cross-correlation functions of their real and imaginary parts by the formula

R_z1z2(t₁, t₂) = R_x1x2(t₁, t₂) + R_y1y2(t₁, t₂) + i[R_y1x2(t₁, t₂) − R_x1y2(t₁, t₂)].

Tasks

1. Find the mathematical expectation of the random functions:

a) X(t) = Ut², where U is a random variable with M(U) = 5;

b) X(t) = U cos 2t + Vt, where U and V are random variables with M(U) = 3, M(V) = 4.

Ans. a) m_x(t) = 5t²; b) m_x(t) = 3 cos 2t + 4t.

2. The correlation function K_x(t₁, t₂) of a random function X(t) is given. Find the correlation functions of the random functions:

a) Y(t) = X(t) + t; b) Y(t) = (t + 1)X(t); c) Y(t) = 4X(t).

Ans. a) K_y(t₁, t₂) = K_x(t₁, t₂); b) K_y(t₁, t₂) = (t₁ + 1)(t₂ + 1) K_x(t₁, t₂); c) K_y(t₁, t₂) = 16 K_x(t₁, t₂).

3. The variance D_x(t) of a random function X(t) is given. Find the variance of the random functions: a) Y(t) = X(t) + e^t; b) Y(t) = tX(t).

Ans. a) D_y(t) = D_x(t); b) D_y(t) = t² D_x(t).

4. Find: a) the mathematical expectation; b) the correlation function; c) the variance of the random function X(t) = U sin 2t, where U is a random variable with M(U) = 3, D(U) = 6.

Ans. a) m_x(t) = 3 sin 2t; b) K_x(t₁, t₂) = 6 sin 2t₁ sin 2t₂; c) D_x(t) = 6 sin² 2t.
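These answers are easy to cross-check by simulation; a minimal sketch (assuming, for illustration only, that U is normal — the task fixes just M(U) and D(U)):

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(3.0, np.sqrt(6.0), 500_000)   # M(U) = 3, D(U) = 6
t1, t2 = 0.4, 1.1

X1, X2 = U * np.sin(2 * t1), U * np.sin(2 * t2)

print(X1.mean(), 3 * np.sin(2 * t1))                  # a) m_x(t1)
print(np.mean((X1 - X1.mean()) * (X2 - X2.mean())),
      6 * np.sin(2 * t1) * np.sin(2 * t2))            # b) K_x(t1, t2)
print(X1.var(), 6 * np.sin(2 * t1) ** 2)              # c) D_x(t1)
```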

5. Find the normalized correlation function of a random function X(t), knowing its correlation function K_x(t₁, t₂) = 3 cos(t₂ − t₁).

Ans. ρ_x(t₁, t₂) = cos(t₂ − t₁).

6. Find: a) the cross-correlation function; b) the normalized cross-correlation function of the two random functions X(t) = (t + 1)U and Y(t) = (t² + 1)U, where U is a random variable with D(U) = 7.

Ans. a) R_xy(t₁, t₂) = 7(t₁ + 1)(t₂² + 1); b) ρ_xy(t₁, t₂) = 1.

7. The random functions X(t) = (t − 1)U and Y(t) = t²V are given, where U and V are uncorrelated random variables with M(U) = 2, M(V) = 3, D(U) = 4, D(V) = 6. Find: a) the mathematical expectation; b) the correlation function; c) the variance of the sum Z(t) = X(t) + Y(t).

Note. Verify that the cross-correlation function of the given random functions is zero and, consequently, X(t) and Y(t) are not correlated.

Ans. a) m_z(t) = 2(t − 1) + 3t²; b) K_z(t₁, t₂) = 4(t₁ − 1)(t₂ − 1) + 6t₁²t₂²; c) D_z(t) = 4(t − 1)² + 6t⁴.

8. The mathematical expectation m_x(t) = t² + 1 of a random function X(t) is given. Find the mathematical expectation of its derivative.

9. The mathematical expectation m_x(t) = t² + 3 of a random function X(t) is given. Find the mathematical expectation of the random function Y(t) = tX′(t) + t³.

Ans. m_y(t) = t²(t + 2).

10. The correlation function K_x(t₁, t₂) of a random function X(t) is given. Find the correlation function of its derivative.

11. The correlation function K_x(t₁, t₂) of a random function X(t) is given. Find the cross-correlation functions of X(t) and its derivative.

SEVASTOPOL STATE UNIVERSITY

M.M. Ghashim, T.V. Cernautanu

RANDOM FUNCTIONS

Study guide

Approved by the scientific council of the institute

Sevastopol


Ghashim M.M., Cernautanu T.V.

Random Functions: a study guide. Sevastopol: SevGU, 2015.

This manual covers three main sections. Each section includes the basic theoretical material, an analysis of typical examples, and problems for independent work with answers.

It is intended for third-year students studying this topic.

Reviewers:

Ph.D.

Ph.D., Associate Professor

Ph.D., Associate Professor

© SevGU Publishing House, 2015

§ 1. The concept of a random function

§ 2. Characteristics of random functions

§ 3. The operator of a dynamic system

§ 4. Linear transformations of random functions

§ 5. Stationary random processes

§ 6. Spectral expansion of a stationary random function

§ 7. The ergodic property of stationary random functions

Solving typical problems

Problems for independent solution

LITERATURE

Random Functions

The concept of a random function.

In the course of probability theory, the main subject of study was random variables, characterized by the fact that as a result of an experiment they took one and only one value, unknown in advance. Random phenomena were thus studied in "statics," under the fixed, constant conditions of an individual experiment. In practice, however, one often deals with random variables that change continuously during the experiment: for example, the lead angle when continuously tracking a moving target, or the deviation of the trajectory of a guided projectile from the theoretical one during control or homing. In principle, any automatic control system imposes certain requirements on its theoretical basis, the theory of automatic control. The development of this theory is impossible without analyzing the errors that inevitably accompany control processes, which always take place under continuously acting random disturbances, or "interference." These disturbances are by their nature random functions. So:



Definition. A random function X(t) is a function of a non-random argument t which, for each fixed value of the argument, is a random variable.

The specific form taken by a random function X(t) as a result of an experiment is called a realization of the random function.

Example. An aircraft in level flight has a theoretically constant airspeed V. In reality its speed fluctuates about this nominal mean value and is a random function of time. A flight can be regarded as an experiment in which the random function V(t) takes a specific realization (Fig. 1).


The form of the realization changes from experiment to experiment. If a recorder is installed on the aircraft, in each flight it will record a new realization of the random function, different from the others. As a result of several flights one obtains a family of realizations of the random function V(t) (Fig. 2).

In practice there are random functions that depend not on one argument but on several: for example, the state of the atmosphere (temperature, pressure, wind, precipitation). In this course we consider only random functions of one argument. Since this argument is most often time, we denote it by the letter t. We also agree to denote random functions by capital letters (X(t), Y(t), …), in contrast to non-random functions (x(t), y(t), …).

Consider some random function X(t). Suppose n independent experiments have been performed, yielding n realizations, which we denote by experiment number x₁(t), x₂(t), …, x_n(t). Each realization is obviously an ordinary (non-random) function. Thus, as a result of each experiment the random function X(t) turns into a non-random function.

Now fix some value of the argument t. The random function X(t) then turns into a random variable.

Definition. A section of a random function X(t) is the random variable corresponding to a fixed value of the argument of the random function.

A random function thus combines the features of a random variable and of a function. In what follows we will often consider the same X(t) alternately as a random function or as a random variable, depending on whether it is considered over the entire range of t or at a fixed value of t.

Consider the random variable X(t), the section of a random function at the moment t. This random variable obviously has a distribution law which, in general, depends on t. Denote it f(x, t). The function f(x, t) is called the one-dimensional distribution law of the random function X(t).

Obviously, the function f(x, t) is not a complete, exhaustive characteristic of the random function X(t): it characterizes only the distribution law of X(t) for a given, albeit arbitrary, t, and does not answer the question of the dependence between the random variables X(t) at different t. From this point of view a more complete characteristic of the random function X(t) is the so-called two-dimensional distribution law f(x₁, x₂; t₁, t₂), the distribution law of the system of two random variables X(t₁), X(t₂), i.e., of two arbitrary sections of the random function X(t). But even this characteristic is not exhaustive in the general case. Theoretically one could increase the number of arguments without limit and obtain an ever fuller description of the random function, but operating with such cumbersome characteristics of many arguments is extremely difficult. Within this course we will not use distribution laws at all, limiting ourselves to the simplest characteristics of random functions, analogous to the numerical characteristics of random variables.
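The distinction between realizations and sections is easy to make concrete in code. A minimal sketch (the specific random function X(t) = U cos 2t + Vt and the distributions of U and V are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 101)        # grid of argument values
n_exp = 8                             # number of "experiments"

# Illustrative random function X(t) = U*cos(2t) + V*t (U, V random variables)
U = rng.normal(3.0, 1.0, (n_exp, 1))
V = rng.normal(4.0, 0.5, (n_exp, 1))
X = U * np.cos(2 * t) + V * t         # each row is one realization x_i(t)

# A section: fix the argument; the column is a sample of the r.v. X(t_fixed)
section = X[:, 40]
print("realization 0 is an ordinary function of t:", X[0, :3], "...")
print("section X(t=%.2f) across experiments:" % t[40], section)
```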

Assignment for coursework

Given: five initial moments

a₁ = 1, a₂ = 2, a₃ = 2, a₄ = 1, a₅ = 1 (by convention, µ₀ = 1 and µ₁ = 0).

Find: the five central moments.

Having at your disposal the five initial and five central moments, calculate:

a) the expected value;

b) the dispersion;

c) the standard deviation;

d) the coefficient of variation;

e) the asymmetry coefficient;

f) the kurtosis coefficient.

Using the results obtained, qualitatively describe the probability density of the process.
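A sketch of the required computation: the central moments follow from the initial ones via µ_k = Σ_j C(k, j) a_j (−a₁)^(k−j), and the requested characteristics follow from the moments (the kurtosis convention µ₄/σ⁴ − 3 is an assumption; some texts omit the −3):

```python
import math

a = [1, 1, 2, 2, 1, 1]   # a[0] = 1 by convention; a[1..5] from the assignment

# mu_k = sum over j of C(k, j) * a_j * (-a_1)^(k - j)
mu = [sum(math.comb(k, j) * a[j] * (-a[1]) ** (k - j) for j in range(k + 1))
      for k in range(6)]
print("central moments mu_0..mu_5:", mu)

m = a[1]                         # a) expected value
D = mu[2]                        # b) dispersion (variance)
sigma = math.sqrt(D)             # c) standard deviation
v = sigma / m                    # d) coefficient of variation
skew = mu[3] / sigma ** 3        # e) asymmetry coefficient
kurt = mu[4] / sigma ** 4 - 3    # f) kurtosis (excess) coefficient
print(m, D, sigma, v, skew, kurt)
```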

1. Theoretical information

Distributions of random variables and distribution functions

The distribution of a numerical random variable is a function that uniquely determines the probability that the random variable takes a given value or falls in a given interval.

The first case: the random variable takes a finite number of values. Then the distribution is given by the function P(X = x), assigning to each possible value x of the random variable X the probability that X = x.

The second case: the random variable takes infinitely many values. This is possible only when the probability space on which the random variable is defined consists of an infinite number of elementary events. Then the distribution is given by the set of probabilities P(a ≤ X < b) for all pairs of numbers a, b such that a < b. The distribution can be specified by the so-called distribution function F(x) = P(X < x), which defines, for every real x, the probability that the random variable X takes a value less than x. Clearly,

P(a ≤ X < b) = F(b) − F(a).

This relationship shows that the distribution can be calculated from the distribution function and, conversely, the distribution function can be calculated from the distribution.

Distribution functions used in probabilistic-statistical decision-making methods and other applied research are either discrete, continuous, or combinations of the two.

Discrete distribution functions correspond to discrete random variables, which take a finite number of values or values from a set whose elements can be numbered by the natural numbers (such sets are called countable). Their graph looks like a staircase (Fig. 1).

Example 1. The number X of defective items in a batch takes the value 0 with probability 0.3, the value 1 with probability 0.4, the value 2 with probability 0.2, and the value 3 with probability 0.1. The graph of the distribution function of the random variable X is shown in Fig. 1.

Fig. 1. Graph of the distribution function of the number of defective items.

Continuous distribution functions have no jumps. They increase monotonically as the argument increases, from 0 as x → −∞ to 1 as x → +∞. Random variables with continuous distribution functions are called continuous.

Continuous distribution functions used in probabilistic-statistical decision-making methods have derivatives. The first derivative f(x) = F′(x) of the distribution function F(x) is called the probability density. Using the probability density, the distribution function is recovered as

F(x) = ∫_(−∞)^(x) f(t) dt.

For any distribution function,

∫_(−∞)^(+∞) f(x) dx = 1.

The listed properties of distribution functions are constantly used in probabilistic-statistical decision-making methods. In particular, the last equality determines the specific form of the constants in the formulas for the probability densities considered below.

Example 2. The following distribution function is often used:

F(x) = 0 for x ≤ a; F(x) = (x − a)/(b − a) for a < x ≤ b; F(x) = 1 for x > b, (1)

where a and b are numbers with a < b. Let us find the probability density of this distribution function:

f(x) = 1/(b − a) for a < x < b, and f(x) = 0 otherwise

(at the points x = a and x = b the derivative of the function F(x) does not exist).

A random variable with distribution function (1) is called "uniformly distributed on the segment [a; b]".
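A minimal sketch of F(x) and f(x) from Example 2 (the function names are ours):

```python
import numpy as np

def F_uniform(x, a, b):
    """Distribution function (1) of the uniform law on [a, b]."""
    return np.clip((np.asarray(x, dtype=float) - a) / (b - a), 0.0, 1.0)

def f_uniform(x, a, b):
    """Probability density: 1/(b - a) inside (a, b), 0 outside."""
    x = np.asarray(x, dtype=float)
    return np.where((x > a) & (x < b), 1.0 / (b - a), 0.0)

a, b = 2.0, 5.0
print(F_uniform([1.0, 3.5, 6.0], a, b))   # 0.0, 0.5, 1.0
print(f_uniform([1.0, 3.5, 6.0], a, b))   # 0.0, 1/3, 0.0
```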

Mixed distribution functions occur, in particular, when observation stops at some point: for example, when analyzing data obtained from reliability test plans that terminate testing after a certain period, or when analyzing data on products that required warranty repair.

Example 3. Let the service life of a light bulb be a random variable with distribution function F(t), and let the test run until the bulb fails, if this occurs within 100 hours of the start of the test, or else until t₀ = 100 hours. Let G(t) be the distribution function of the time the bulb operates in good condition during this test. Then

G(t) = F(t) for t < t₀; G(t) = 1 for t ≥ t₀.

The function G(t) has a jump at the point t₀, since the corresponding random variable takes the value t₀ with probability 1 − F(t₀) > 0.

Characteristics of random variables. Probabilistic-statistical decision-making methods use a number of characteristics of random variables, expressed through distribution functions and probability densities.

When describing income differentiation, when finding confidence limits for the parameters of distributions of random variables, and in many other cases, the concept of a "quantile of order p" is used, where 0 < p < 1 (denoted x_p). A quantile of order p is a value of the random variable at which the distribution function takes the value p, or at which it "jumps" from a value less than p to a value greater than p (Fig. 2). It may happen that this condition is satisfied for all x belonging to some interval (i.e., the distribution function is constant on this interval and equal to p); then every such value is called a quantile of order p. For continuous distribution functions there is, as a rule, a single quantile x_p of order p (Fig. 2), and

F(x_p) = p. (2)

Fig. 2. Determining the quantile x_p of order p.

Example 4. Let us find the quantile x_p of order p for the distribution function F(x) from (1).

For 0 < p < 1 the quantile x_p is found from the equation

(x_p − a)/(b − a) = p,

i.e., x_p = a + p(b − a) = a(1 − p) + bp. For p = 0 any x ≤ a is a quantile of order p = 0; a quantile of order p = 1 is any number x ≥ b.
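The same result in code (a sketch; the function name is ours):

```python
def quantile_uniform(p, a, b):
    """Quantile of order p (0 < p < 1) of the uniform law: solves F(x_p) = p."""
    assert 0.0 < p < 1.0
    return a + p * (b - a)       # equivalently a*(1 - p) + b*p

print(quantile_uniform(0.5, 2.0, 5.0))   # 3.5: for p = 1/2 this is the median
```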

For discrete distributions there is, as a rule, no x_p satisfying equation (2). More precisely, if the distribution of a random variable is given by Table 1, where x₁ < x₂ < … < x_k, then equality (2), considered as an equation with respect to x_p, has solutions only for k values of p, namely

p = p₁,

p = p₁ + p₂,

p = p₁ + p₂ + p₃,

…,

p = p₁ + p₂ + … + p_m (3 < m < k),

…,

p = p₁ + p₂ + … + p_k. (3)

Table 1. Distribution of a discrete random variable

Values x of the random variable X: x₁, x₂, …, x_k
Probabilities P(X = x): p₁, p₂, …, p_k

For the listed k values of the probability p, the solution x_p of equation (2) is not unique; namely,

F(x) = p₁ + p₂ + … + p_m

for all x such that x_m < x < x_(m+1). That is, x_p is any number from the interval (x_m; x_(m+1)). For every other p from the interval (0; 1) not included in list (3), there is a "jump" from a value less than p to a value greater than p. Namely, if

p₁ + p₂ + … + p_m < p < p₁ + p₂ + … + p_m + p_(m+1),

then x_p = x_(m+1).

This property of discrete distributions creates significant difficulties in tabulating and using such distributions, since typical numerical values of the distribution characteristics cannot be maintained exactly. In particular, this is true for the critical values and significance levels of nonparametric statistical tests (see below), since the distributions of the statistics of these tests are discrete.

The quantile of order p = ½ is of great importance in statistics. It is called the median (of the random variable X or of its distribution function F(x)) and is denoted Me(X). In geometry there is the concept of a "median": a straight line through a vertex of a triangle that divides the opposite side in half. In mathematical statistics the median divides in half not the side of a triangle but the distribution of a random variable: the equality F(x₀.₅) = 0.5 means that the probability of falling to the left of x₀.₅ and the probability of falling to the right of x₀.₅ (or exactly at x₀.₅) are equal to each other and equal ½, i.e.,

P(X < x₀.₅) = P(X > x₀.₅) = ½.

The median indicates the "center" of the distribution. From the point of view of one of the modern approaches, the theory of robust statistical procedures, the median is a better characteristic of a random variable than the mathematical expectation. When processing measurement results on an ordinal scale (see the chapter on measurement theory) the median can be used, but the mathematical expectation cannot.

A characteristic of a random variable such as the mode has a clear meaning: the value (or values) of the random variable corresponding to a local maximum of the probability density for a continuous random variable, or to a local maximum of the probability for a discrete random variable.

If x₀ is a mode of a random variable with density f(x), then, as is known from differential calculus,

f′(x₀) = 0.

A random variable can have many modes. Thus, for the uniform distribution (1) every point x with a < x < b is a mode. This, however, is an exception; most random variables used in probabilistic-statistical decision-making methods and other applied research have one mode. Random variables, densities, and distributions with one mode are called unimodal.

The mathematical expectation of discrete random variables with a finite number of values was discussed in the chapter "Events and Probabilities." For a continuous random variable X the expected value M(X) satisfies the equality

M(X) = ∫_(−∞)^(+∞) x f(x) dx.

Example 5. The expectation of a uniformly distributed random variable X equals

M(X) = ∫_a^b x dx/(b − a) = (a + b)/2.

For the random variables considered in this chapter, all the properties of mathematical expectations and variances established earlier for discrete random variables with a finite number of values remain true. We do not give proofs, since they require mathematical subtleties not needed for understanding and qualified application of probabilistic-statistical decision-making methods.

Comment. This textbook deliberately avoids the mathematical subtleties associated, in particular, with the concepts of measurable sets and measurable functions, the algebra of events, etc. Those wishing to master these concepts should turn to the specialized literature, in particular to encyclopedias.

Each of the three characteristics (mathematical expectation, median, mode) describes a "center" of the probability distribution. The concept of "center" can be defined in different ways, hence the three different characteristics. However, for an important class of distributions, the symmetric unimodal ones, all three characteristics coincide.

A distribution density f(x) is the density of a symmetric distribution if there is a number x₀ such that

f(x₀ + x) = f(x₀ − x). (3)

Equality (3) means that the graph of the function y = f(x) is symmetric about the vertical line through the center of symmetry x = x₀. From (3) it follows that the symmetric distribution function satisfies the relation

F(x₀ + x) + F(x₀ − x) = 1. (4)

For a symmetric distribution with one mode, the mathematical expectation, the median, and the mode coincide and are equal to x₀.

The most important case is symmetry about 0, i.e., x₀ = 0. Then (3) and (4) become the equalities

f(−x) = f(x) (5)

and

F(−x) = 1 − F(x), (6)

respectively. These relations show that there is no need to tabulate a symmetric distribution for all x; it is enough to have tables for x ≥ 0.

Let us note one more property of symmetric distributions, constantly used in probabilistic-statistical decision-making methods and other applied research. For a continuous distribution function

P(|X| < a) = P(−a < X < a) = F(a) − F(−a),

where F is the distribution function of the random variable X. If the distribution function F is symmetric about 0, i.e., formula (6) is valid for it, then

P(|X| < a) = 2F(a) − 1.

Another formulation of this statement is often used: if x_α and x_(1−α) are the quantiles of orders α and 1 − α, respectively (see (2)), of a distribution function symmetric about 0, then from (6) it follows that

x_α = −x_(1−α).

From the characteristics of location (mathematical expectation, median, mode) let us pass to the characteristics of the spread of the random variable X: the variance D(X), the standard deviation σ, and the coefficient of variation v. The definition and properties of the variance for discrete random variables were considered in the previous chapter. For continuous random variables

D(X) = M[(X − M(X))²] = ∫_(−∞)^(+∞) (x − M(X))² f(x) dx.

The standard deviation is the non-negative square root of the variance:

σ = √D(X).

The coefficient of variation is the ratio of the standard deviation to the mathematical expectation:

v = σ/M(X).

The coefficient of variation is used when M(X) > 0. It measures the spread in relative units, whereas the standard deviation measures it in absolute units.

Example 6. For a uniformly distributed random variable X let us find the variance, the standard deviation, and the coefficient of variation. The variance is

D(X) = ∫_a^b (x − (a + b)/2)² dx/(b − a).

The change of variable y = x − (a + b)/2 allows us to write

D(X) = ∫_(−c)^(c) y² dy/(b − a) = (b − a)²/12,

where c = (b − a)/2. Therefore the standard deviation equals σ = (b − a)/(2√3), and the coefficient of variation is v = (b − a)/(√3(a + b)).
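A sketch checking these formulas against a Monte Carlo estimate (the values a = 2, b = 5 are arbitrary):

```python
import math
import numpy as np

a, b = 2.0, 5.0
m = (a + b) / 2                        # expectation of the uniform law
D = (b - a) ** 2 / 12                  # variance
sigma = math.sqrt(D)                   # standard deviation (b - a)/(2*sqrt(3))
v = sigma / m                          # coefficient of variation

# Monte Carlo cross-check
x = np.random.default_rng(3).uniform(a, b, 1_000_000)
print((m, D, v), (x.mean(), x.var(), x.std() / x.mean()))
```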

For each random variable X one defines three more quantities: the centered variable Y, the normalized variable V, and the reduced variable U. The centered random variable Y is the difference between the given random variable X and its mathematical expectation M(X), i.e., Y = X − M(X). The mathematical expectation of the centered random variable Y equals 0, and its variance is the variance of the original variable: M(Y) = 0, D(Y) = D(X). The distribution function F_Y(x) of the centered random variable Y is related to the distribution function F(x) of the original random variable X by

F_Y(x) = F(x + M(X)).

The densities of these random variables satisfy the equality

f_Y(x) = f(x + M(X)).

The normalized random variable V is the ratio of the given random variable X to its standard deviation σ, i.e., V = X/σ. The expectation and variance of the normalized random variable V are expressed through the characteristics of X as

M(V) = M(X)/σ = 1/v, D(V) = 1,

where v is the coefficient of variation of the original random variable X. For the distribution function F_V(x) and density f_V(x) of the normalized random variable V we have

F_V(x) = F(σx), f_V(x) = σ f(σx),

where F(x) is the distribution function and f(x) the probability density of the original random variable X.

The reduced random variable U is the centered and normalized random variable:

U = (X − M(X))/σ.

For the reduced random variable,

M(U) = 0, D(U) = 1, F_U(x) = F(σx + M(X)), f_U(x) = σ f(σx + M(X)). (7)

Normalized, centered, and reduced random variables are constantly used both in theoretical studies and in algorithms, software products, and regulatory-technical and instructional documentation, in particular because they simplify the justification of methods and the formulation of theorems and computational formulas.

More general transformations of random variables are also used. Thus, if Y = aX + b, where a and b are numbers with a > 0, then

M(Y) = aM(X) + b, D(Y) = a²D(X), F_Y(x) = F((x − b)/a), f_Y(x) = (1/a) f((x − b)/a). (8)

Example 7. If a = 1/σ and b = −M(X)/σ, then Y = aX + b is the reduced random variable U, and formulas (8) turn into formulas (7).
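A sketch of the three derived variables (the exponential distribution of X is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.exponential(2.0, 1_000_000)     # a positive RV: M(X) = 2, sigma = 2

Y = X - X.mean()                        # centered:   M(Y) = 0, D(Y) = D(X)
V = X / X.std()                         # normalized: D(V) = 1, M(V) = 1/v
U = (X - X.mean()) / X.std()            # reduced:    M(U) = 0, D(U) = 1

print(Y.mean(), V.std(), U.mean(), U.std())   # ~0, ~1, ~0, ~1
```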

With each random variable X one can associate the set of random variables Y given by the formula Y = aX + b for various a > 0 and b. This set is called the scale-shift family generated by the random variable X. The distribution functions F_Y(x) constitute the scale-shift family of distributions generated by the distribution function F(x). Instead of Y = aX + b one often uses the notation

Y = (X − c)/d. (9)

The number c is called the shift parameter and the number d the scale parameter. Formula (9) shows that X, the result of measuring some quantity, turns into Y, the result of measuring the same quantity, if the origin of measurement is moved to the point c and the new unit of measurement, d times larger than the old one, is then used.

For the scale-shift family (9), the distribution of X is called standard. Probabilistic-statistical decision-making methods and other applied research use the standard normal distribution, the standard Weibull–Gnedenko distribution, the standard gamma distribution, etc. (see below).

Other transformations of random variables are also used. For example, for a positive random variable X one considers Y = lg X, where lg X is the decimal logarithm of X.


A random function is a function that, as a result of an experiment, can take one or another specific form, unknown in advance. Usually the argument of a random function (s.f.) is time; the s.f. is then called a random process (s.p.).

An s.f. of a continuously varying argument t is a random variable (r.v.) whose distribution depends not only on the argument t = t₁ but also on the particular values this quantity took at other values of the argument t = t₂. These r.v.'s are correlated with one another, the more closely, the closer the values of the arguments. In the limit, as the interval between two values of the argument tends to zero, the correlation coefficient equals one:

r(X(t₁), X(t₁ + Δt₁)) → 1 as Δt₁ → 0,

i.e., X(t₁) and X(t₁ + Δt₁) are, for Δt₁ → 0, related by a linear dependence.

As a result of one experiment an s.f. takes an innumerable (in general, uncountable) set of values: one for each value of the argument, or for each combination of values of the arguments. This function has one quite definite value for each moment of time. The result of measuring a continuously varying quantity is such an r.v. that in each individual experiment it is a certain function of time.

An s.f. can also be regarded as an infinite set of r.v.'s depending on one or several continuously varying parameters t. To each given value of the parameter t there corresponds one r.v. X_t. Together all the r.v.'s X_t define the s.f. X(t). These r.v.'s are correlated with one another, the more strongly, the closer they are to one another.

An elementary s.f. is the product of an ordinary r.v. X and a non-random function φ(t): X(t) = X·φ(t), i.e., an s.f. in which only the scale is random, not the form.

A centered s.f. has an m.o. (mathematical expectation) equal to zero. Below, p(x, t₁) denotes the distribution density of the r.v. X(t₁) (the value of the s.f. X(t)) taken at an arbitrary value t₁ of the argument t.

A realization of the s.f. X(t) is described by an equation x = f₁(t) in the first experiment and by x = f₂(t) in the second.

In general x = f₁(t) and x = f₂(t) are different functions. But the values of the s.f. at two arguments t₁ and t₂ are the more nearly identical and linearly related, the closer t₁ is to t₂ (t₁ → t₂).

The one-dimensional probability density of an s.f., p(x, t), depends on x and on the parameter t. The two-dimensional probability density p(x₁, x₂; t₁, t₂) is the joint distribution law of the values X(t₁) and X(t₂) of the s.f. X(t) for two arbitrary values t₁ and t₂ of the argument t.

In general, the function X(t) is characterized by a large number of n-dimensional distribution laws p(x₁, …, x_n; t₁, …, t_n).

The m.o. of an s.f. X(t) is the non-random function m_x(t) which for each value of the argument t equals the m.o. of the section of the s.f. at this t:

m_x(t) = ∫ x p(x, t) dx,

a function of t expressed through the density p(x, t). Likewise the dispersion D_x(t) is a non-random function.

The degree of dependence of the r.v.'s for different values of the argument is characterized by the autocorrelation function.

The autocorrelation function of an s.f. X(t) is the function K_x(t_i, t_j) which for each pair of values t_i, t_j equals the correlation moment of the corresponding ordinates of the s.f. (for i = j the correlation function turns into the dispersion of the s.f.):

K_x(t₁, t₂) = ∫∫ (x₁ − m_x(t₁))(x₂ − m_x(t₂)) p(x₁, x₂; t₁, t₂) dx₁ dx₂,

where p(x₁, x₂; t₁, t₂) is the joint distribution density of the two r.v.'s (values of the s.f.) taken at two arbitrary values t₁ and t₂ of the argument t. At t₁ = t₂ = t we get the dispersion D(t).

The autocorrelation function is the set of m.o.'s of the products of the deviations of two ordinates of the s.f., X(t₁) and X(t₂), from the ordinates m_x(t₁), m_x(t₂) of the non-random m.o. function, taken at the same arguments.

The autocorrelation function characterizes the degree of variability of the s.f. under a change of the argument. In the figure it can be seen that the dependence between the values of the s.f. corresponding to two given values of the argument t is weaker in the first case.

Fig. Correlation-related random functions

If two s.f.'s X(t) and Y(t) forming a system are not independent, then their mutual (cross-) correlation function is not identically zero:

K_xy(t₁, t₂) = ∫∫ (x − m_x(t₁))(y − m_y(t₂)) p(x, y; t₁, t₂) dx dy,

where p(x, y; t₁, t₂) is the joint distribution density of the two r.v.'s (values of the two s.f.'s X(t) and Y(t)) taken at two arbitrary arguments (t₁ is the argument of the function X(t), t₂ the argument of Y(t)).

If X(t) and Y(t) are independent, then K_XY(t₁, t₂) = 0. A system of n s.f.'s X₁(t), X₂(t), …, X_n(t) is characterized by n m.o.'s, n autocorrelation functions, and another n(n − 1)/2 cross-correlation functions.

The mutual correlation function of two s.f.'s X(t) and Y(t) (it characterizes the stochastic dependence between them) is the non-random function of two arguments t_i and t_j which for each pair of values t_i, t_j equals the correlation moment of the corresponding sections of the s.f.'s. It establishes a connection between two values of the two functions (these values being r.v.'s) at the two arguments t₁ and t₂.

Of particular importance are stationary random functions, whose probabilistic characteristics do not change under any shift of the argument. The m.o. of a stationary s.f. is constant (i.e., it is not a function of t), and the correlation function depends only on the difference of the argument values: K_x(t₁, t₂) = K_x(τ), τ = t₂ − t₁.

K_x(τ) is an even function (symmetric about the ordinate axis).

For a large time interval τ = t₂ − t₁ the deviation of the ordinate of the s.f. from its m.o. at the moment t₂ becomes practically independent of the value of this deviation at the moment t₁. In this case the function K_X(τ), giving the value of the correlation moment between X(t₁) and X(t₂), tends to zero as |τ| → ∞.

Many stationary s.f.'s possess the ergodic property: as the observation interval increases without limit, the average observed value of the stationary s.f. approaches its m.o. without limit, with probability 1. Observing a stationary s.f. at different values of t over a sufficiently long interval in a single experiment is equivalent to observing its values at the same value of t in a number of experiments.
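A minimal sketch of the ergodic property: the time average over one long realization of a stationary process approaches the m.o. (the AR(1) model used here as the stationary process is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
m, rho, n = 10.0, 0.95, 200_000        # mean, one-step correlation, length

# One long realization of a stationary AR(1) process (an ergodic process)
x = np.empty(n)
x[0] = m
for i in range(1, n):
    x[i] = m + rho * (x[i - 1] - m) + rng.normal(0.0, 1.0)

print(x.mean())   # the time average approaches the m.o. m = 10
```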

Sometimes the characteristics of a transformed s.f. must be determined from the characteristics of the original one. Thus, if

Z(t) = ∫₀ᵗ X(s) ds, Y(t) = dX(t)/dt, (70.5)

then

m_z(t) = ∫₀ᵗ m_x(s) ds, m_y(t) = dm_x(t)/dt,

i.e., the m.o. of the integral (derivative) of an s.f. equals the integral (derivative) of its m.o. (y(t) is the rate of change of the s.f. X(t); m_y(t) is the rate of change of the m.o.).

Integration or differentiation of an s.f. again yields an s.f. If X(t) is normally distributed, then Z(t) and Y(t) are also normally distributed. If X(t) is a stationary s.f., then Z(t) is no longer stationary, since its dispersion depends on t.

Examples of correlation functions.

1) (from (2) at b®0); 2) ;

3) ; 4) ;

5)(from (3) with b®0); 6) (from (4) with b®0).

On the charts a= 1, b= 5, s= 1.

a characterizes the rate of decrease of the correlation between the ordinates of the s.f. as the difference between the arguments of these ordinates grows.

a/b characterizes the "degree of irregularity of the process": for small a/b the ordinates of the process are strongly correlated and a realization resembles a sinusoid; for large a/b the realizations are highly irregular.

The correlation function of the derivative of an s.f. is

K_ẋ(t₁, t₂) = ∂²K_x(t₁, t₂)/∂t₁∂t₂. (71.5)

For a stationary function formula (71.5) takes the form

K_ẋ(τ) = −d²K_x(τ)/dτ².

The mutual correlation function of an s.f. and its derivative is R_xẋ(τ) = dK_x(τ)/dτ. For a differentiable stationary process the ordinate of the s.f. and of its derivative, taken at the same moment of time, are uncorrelated r.v.'s (and for a normal process, independent).

Multiplying an s.f. by a deterministic function a(t) yields the s.f. Z(t) = a(t)X(t), whose correlation function equals

K_Z(t₁, t₂) = a(t₁)a(t₂) K_X(t₁, t₂), (72.5)

where a(t) is the deterministic function.

The sum of two s.f.'s is also an s.f.: Z(t) = X(t) + Y(t), and its correlation function, in the presence of correlation between X(t) and Y(t), is

K_Z(t₁, t₂) = K_X(t₁, t₂) + K_Y(t₁, t₂) + K_XY(t₁, t₂) + K_YX(t₁, t₂), (73.5)

where K_XY(t₁, t₂) (see (68.5)) is the mutual correlation function of the two dependent s.f.'s X(t) and Y(t).

If X(t) and Y(t) are independent, then K_XY(t₁, t₂) = 0. The m.o. of the s.f. Z(t) is m_z(t) = m_x(t) + m_y(t).
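Formula (73.5) can be verified numerically; a sketch (the particular dependent pair X(t) = Ut, Y(t) = (U + W)t² is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400_000
U = rng.normal(0, 1, n)
W = rng.normal(0, 1, n)
t1, t2 = 0.7, 1.9

def cov(a, b):
    """Correlation moment of two samples of sections."""
    return np.mean((a - a.mean()) * (b - b.mean()))

# Dependent random functions sharing the random variable U
X = lambda t: U * t
Y = lambda t: (U + W) * t ** 2

Z1, Z2 = X(t1) + Y(t1), X(t2) + Y(t2)
lhs = cov(Z1, Z2)                                  # K_Z(t1, t2)
rhs = (cov(X(t1), X(t2)) + cov(Y(t1), Y(t2))       # K_X + K_Y
       + cov(X(t1), Y(t2)) + cov(Y(t1), X(t2)))    # + K_XY + K_YX
print(lhs, rhs)   # agree to sampling error
```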

Laboratory work No. 4

RANDOM PROCESSES
AND THEIR CHARACTERISTICS

4.1. GOAL OF THE WORK

Introduction to the basic concepts of the theory of random processes. Measurement of moment characteristics and estimation of the PDFs of the instantaneous values of random processes. Analysis of the form of the autocorrelation function (ACF) and power spectral density (PSD) of a random process. Study of the transformations of a random process by linear stationary and nonlinear inertialess circuits.

4.2. THEORETICAL INFORMATION

Random events and random variables
An event that may or may not occur in some experiment is called a random event and is characterized by the probability of its realization P(A). A random variable (RV) X can take in an experiment one value x from some set Ω; this value is called a realization of the RV. Ω can be, for example, the set of real numbers or a subset of it. If the set Ω is finite or countable (a discrete RV), one can speak of the probability P(X = x_i) of the event consisting in the RV taking the value x_i; that is, a probability distribution is specified on the set of values of the discrete RV. If the set Ω is uncountable (for example, the whole real line), then a complete description of the RV is given by the distribution function, defined by the expression

F(x) = P(X < x).

If the distribution function is continuous and differentiable, one can define the probability density function (PDF), also called for brevity the probability density (and sometimes just the density):

w(x) = dF(x)/dx, wherein F(x) = ∫_(−∞)^(x) w(y) dy.

Obviously, the distribution function is a non-negative non-decreasing function with the properties F(−∞) = 0, F(+∞) = 1. Hence the PDF is a non-negative function satisfying the normalization condition

∫_(−∞)^(+∞) w(x) dx = 1.

Sometimes one restricts attention to the numerical characteristics of a random variable, most often its moments. The initial moment of order k is

m_k = ∫_(−∞)^(+∞) x^k w(x) dx = ⟨x^k⟩,

where the angle brackets denote the ensemble-averaging operator. The first initial moment m₁ = ⟨x⟩ is called the mathematical expectation or the center of the distribution.

The central moment of order k is

µ_k = ⟨(x − m₁)^k⟩.

The most used central moment is the second one, the dispersion

σ² = µ₂ = ⟨(x − m₁)²⟩.

Instead of the dispersion one often operates with the standard deviation (STD) of the random variable, σ = √µ₂.

The mean square, or second initial moment m₂, is related to the dispersion and the mathematical expectation: m₂ = σ² + m₁².

To describe the form of the PDF, the asymmetry coefficient µ₃/σ³ and the excess (kurtosis) coefficient µ₄/σ⁴ − 3 are used.

The normal or Gaussian distribution, with PDF

w(x) = (1/(σ√(2π))) exp(−(x − m)²/(2σ²)),

is often used; here m and σ are the distribution parameters (the mathematical expectation and STD, respectively). For a Gaussian distribution the asymmetry and excess coefficients are both zero.

Two random variables x and y are characterized by a joint distribution density w(x, y). The numerical characteristics of the joint density are the initial and central mixed moments

m_kl = ⟨x^k y^l⟩, µ_kl = ⟨(x − m_x)^k (y − m_y)^l⟩,

where k and l are arbitrary positive integers, and m_x and m_y are the mathematical expectations of the RVs x and y.

The most used mixed moments are those of second order: the initial one (the correlation moment)

⟨xy⟩,

and the central one (the covariance moment, or covariance)

⟨(x − m_x)(y − m_y)⟩.

For a pair of Gaussian random variables the two-dimensional joint PDF has the form

w(x, y) = (1/(2πσ_x σ_y √(1 − r²))) exp{−[((x − m_x)/σ_x)² − 2r((x − m_x)/σ_x)((y − m_y)/σ_y) + ((y − m_y)/σ_y)²]/(2(1 − r²))},

where σ_x, σ_y are the STDs, m_x, m_y the mathematical expectations, and r the correlation coefficient, the normalized covariance moment

r = ⟨(x − m_x)(y − m_y)⟩/(σ_x σ_y).

With a zero correlation coefficient the joint PDF obviously factors,

w(x, y) = w(x)·w(y),

i.e., uncorrelated Gaussian random variables are independent.

Random processes

A random process is a sequence of random variables ordered by increasing values of some variable (usually time). One can pass from the description of a random variable to the description of a random process by considering the joint distributions of two, three, or more values of the process at different moments of time. In particular, considering the process in n time sections (at t₁, …, t_n), we obtain the n-dimensional joint distribution function and probability density of the random variables X(t₁), …, X(t_n), defined by the expression

F(x₁, …, x_n; t₁, …, t_n) = P(X(t₁) < x₁, …, X(t_n) < x_n).

A random process is considered completely defined if for any n one can write its joint PDF for any choice of the time moments t₁, …, t_n.

Often, when describing a random process, one can restrict oneself to the set of its mixed initial moments (when they exist, i.e., the corresponding integrals converge)

⟨X(t₁)^(k₁) X(t₂)^(k₂) ⋯ X(t_n)^(k_n)⟩

and mixed central moments

⟨(X(t₁) − m(t₁))^(k₁) ⋯ (X(t_n) − m(t_n))^(k_n)⟩

for non-negative integers k₁, …, k_n and arbitrary n.

In the general case the moments of the joint PDF depend on the positions of the sections on the time axis and are called moment functions. Most often the second mixed central moment is used,

R_x(t₁, t₂) = ⟨(X(t₁) − m(t₁))(X(t₂) − m(t₂))⟩,

called the autocorrelation function (ACF). Recall that here and below the dependence on time is not always written out explicitly; in particular, the mathematical expectation m = m(t) and the standard deviation σ = σ(t) are functions of time.

Two random processes X(t) and Y(t) can be considered jointly; such a consideration presupposes their description in the form of a joint multidimensional PDF, as well as by the set of all moments, including the mixed ones. Most often the second mixed central moment is used,

R_xy(t₁, t₂) = ⟨(X(t₁) − m_x(t₁))(Y(t₂) − m_y(t₂))⟩,

called the cross-correlation function.

Among all random processes one distinguishes those for which the joint n-dimensional PDF does not change when all the time sections are simultaneously shifted by the same amount. Such processes are called stationary in the narrow sense, or strictly stationary.

More often a wider class of random processes with weakened stationarity properties is considered. A process is called stationary in the broad sense if under a simultaneous shift of the sections only its moments of order no higher than the second are required not to change. In practice this means that the process is stationary in the broad sense if it has a constant mean (mathematical expectation) m and dispersion σ², and the ACF depends only on the difference of the time moments but not on their positions on the time axis:

1) m(t) = m = const,

2) R(t₁, t₂) = R(t₂ − t₁) = R(τ).

Note that R(t, t) = σ², from which the constancy of the dispersion follows.

It is not difficult to verify that a process stationary in the narrow sense is also stationary in the broad sense. The converse is generally false, although there are processes for which stationarity in the broad sense implies stationarity in the narrow sense.

The joint n-dimensional PDF of the samples of a Gaussian process taken in n time sections has the form

w(x₁, …, x_n) = ((2π)^(n/2) σ₁⋯σ_n √Δ)^(−1) exp{−(1/(2Δ)) Σ_(j,k) Δ_(jk) ((x_j − m_j)/σ_j)((x_k − m_k)/σ_k)}, (4.1)

where Δ is the determinant of the square matrix composed of the pairwise correlation coefficients of the samples, and Δ_(jk) is the algebraic complement (cofactor) of an element of this matrix.

The joint Gaussian PDF for any n is completely determined by the mathematical expectations, dispersions, and correlation coefficients of the samples, i.e., by moment functions of order no higher than the second. If a Gaussian process is stationary in the broad sense, then all the mathematical expectations are the same, all the variances (and hence the STDs) are equal to each other, and the correlation coefficients are determined only by how far apart the time sections are. Then, obviously, PDF (4.1) does not change if all the time sections are shifted left or right by the same amount. It follows that a Gaussian process stationary in the broad sense is stationary in the narrow sense (strictly stationary).

Among strictly stationary random processes one often distinguishes a narrower class, the ergodic random processes. For ergodic processes the moments found by ensemble averaging are equal to the corresponding moments found by time averaging:

⟨X^k⟩ = lim_(T→∞) (1/T) ∫₀ᵀ x^k(t) dt

(here the right-hand side is the symbolic notation of the time-averaging operator).

In particular, for an ergodic process the mathematical expectation, dispersion, and ACF are, respectively,

m = lim_(T→∞) (1/T) ∫₀ᵀ x(t) dt,

σ² = lim_(T→∞) (1/T) ∫₀ᵀ (x(t) − m)² dt,

R(τ) = lim_(T→∞) (1/T) ∫₀ᵀ (x(t) − m)(x(t + τ) − m) dt.

Ergodicity is highly desirable, since it makes it possible to measure (estimate) the numerical characteristics of a random process in practice. The point is that usually only one (though possibly rather long) realization of a random process is available to the observer. Ergodicity means, in essence, that this single realization is a full representative of the whole ensemble.

The characteristics of an ergodic process can be measured with simple instruments: if the process is a voltage varying in time, then a voltmeter of the magnetoelectric type measures its mathematical expectation (the constant component), while a voltmeter of the electromagnetic or thermoelectric type connected through a separating capacitor (to exclude the constant component) measures its root-mean-square (RMS) value. The device whose block diagram is shown in Fig. 4.1 makes it possible to measure values of the autocorrelation function for various τ. The low-pass filter plays the role of an integrator here, and the capacitor centers the process, since it does not pass the direct-current component. This device is called a correlometer.

Fig. 4.1

Sufficient conditions for the ergodicity of a stationary random process are the condition R(τ) → 0 as τ → ∞, and also the weaker Slutsky condition

lim_(T→∞) (1/T) ∫₀ᵀ R(τ) dτ = 0.

Discrete algorithms for estimating SP parameters

The expressions above for estimating the parameters and correlation function of an SP are valid for continuous time. In this laboratory work (as in many modern technical systems and instruments) analog signals are generated and processed by digital devices, which requires some modification of the corresponding expressions. In particular, the estimate of the mathematical expectation is the sample mean

m* = (1/N) Σ_(i=1)^(N) x_i,

where x₁, …, x_N is the sequence of process samples (N is the sample size). The estimate of the dispersion is the sample variance, defined by the expression

D* = (1/(N − 1)) Σ_(i=1)^(N) (x_i − m*)².

The estimate of the autocorrelation function, otherwise called the correlogram, is found as

R*(k) = (1/(N − k)) Σ_(i=1)^(N−k) (x_i − m*)(x_(i+k) − m*).

The estimate of the probability density of the instantaneous value of a stationary SP is the histogram. To find it, the range of possible values of the SP is divided into intervals of equal width; then for each j-th interval the number n_j of samples falling into it is counted. The histogram is the set of numbers {n_j}, usually depicted as a bar diagram. The number of intervals for a given sample size is chosen as a compromise between the accuracy of the estimate and the resolution (degree of detail) of the histogram.
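A sketch of these estimators (the normalizations 1/(N − 1) and 1/(N − k) follow the expressions above; the test signal is an arbitrary Gaussian sample):

```python
import numpy as np

def sp_estimates(x, kmax=50, nbins=20):
    """Sample mean, sample variance, correlogram, and histogram of a realization."""
    n = len(x)
    m = x.mean()                                   # sample mean m*
    d = np.sum((x - m) ** 2) / (n - 1)             # sample variance D*
    acf = np.array([np.mean((x[:n - k] - m) * (x[k:] - m))
                    for k in range(kmax)])         # correlogram R*(k)
    hist, edges = np.histogram(x, bins=nbins)      # histogram {n_j}
    return m, d, acf, hist

x = np.random.default_rng(7).normal(0.0, 1.0, 10_000)
m, d, acf, hist = sp_estimates(x)
print(m, d, acf[:3])
```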

Correlation-spectral theory of random processes

If we are interested only in moment characteristics of the first and second order, which determine the property of broad-sense stationarity, then a stationary SP is described at the level of the autocorrelation function R(τ) and the power spectral density (PSD) S(ω), connected by the Fourier-transform pair (the Wiener–Khinchin theorem):

S(ω) = ∫_(−∞)^(+∞) R(τ) e^(−jωτ) dτ, R(τ) = (1/2π) ∫_(−∞)^(+∞) S(ω) e^(jωτ) dω.

Obviously, the PSD is a non-negative function. If the process has a non-zero mathematical expectation m, then the term 2πm²δ(ω) is added to the PSD.

For a real process the ACF and PSD are even real functions.
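A sketch of the direct Wiener–Khinchin transform for the ACF R(τ) = σ²e^(−α|τ|), whose exact PSD 2σ²α/(α² + ω²) serves as a check (the exponential ACF is an illustrative choice):

```python
import numpy as np

sigma2, alpha = 1.0, 2.0
tau = np.linspace(-20.0, 20.0, 4001)
dtau = tau[1] - tau[0]
R = sigma2 * np.exp(-alpha * np.abs(tau))    # ACF R(tau) = sigma^2 e^(-alpha|tau|)

omega = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
# Direct transform: S(w) = integral of R(tau) * exp(-j*w*tau) over tau
S = np.array([(R * np.exp(-1j * w * tau)).sum() * dtau for w in omega]).real

print(S)                                             # numerical PSD
print(2 * sigma2 * alpha / (alpha ** 2 + omega ** 2))  # exact PSD, for comparison
```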

Sometimes one can restrict oneself to numerical characteristics: the correlation interval and the effective spectrum width. The correlation interval is defined in different ways.
