Rotating amplitude vector method. Harmonic oscillations of a quantity s are described by an equation of the form s = a·cos(ωt + α).

Vector diagram. Addition of vibrations.

The solution of a number of problems in the theory of oscillations is greatly simplified and becomes more transparent if the oscillations are depicted graphically using the method of vector diagrams. Let us choose some axis X. From a point O on the axis we draw a vector of length a, which initially forms an angle α with the axis (Fig. 2.14.1). If we set this vector rotating with angular velocity ω, then the projection of the end of the vector onto the axis X will change over time according to the law

x = a·cos(ωt + α).

Therefore, the projection of the end of the vector onto the axis performs a harmonic oscillation with an amplitude equal to the length of the vector, with a circular frequency equal to the angular velocity of rotation of the vector, and with an initial phase equal to the angle that the vector forms with the axis at the initial moment of time. The angle formed by the vector with the axis at a given moment determines the phase of the oscillation at that moment: φ = ωt + α.

From what has been said, it follows that a harmonic oscillation can be represented using a vector whose length is equal to the amplitude of the oscillation, and whose direction forms an angle with a certain axis equal to the phase of the oscillation. This is the essence of the method of vector diagrams.

Addition of oscillations of the same direction.

Consider the addition of two harmonic oscillations whose directions are parallel:

x₁ = a₁·cos(ω₁t + α₁),  x₂ = a₂·cos(ω₂t + α₂). (2.14.1)

The resulting displacement x will be the sum of x₁ and x₂. It will be an oscillation with some amplitude a.

Let us use the method of vector diagrams (Fig. 2.14.2). In the figure, α, α₁ and α₂ are the phases of the resulting and the added oscillations, respectively. It is easy to see that the vector a can be found by adding the vectors a₁ and a₂. However, if the frequencies of the added oscillations are different, then the resulting amplitude changes in magnitude over time and the vector rotates at a non-constant rate, i.e. the oscillation is not harmonic but represents some complex oscillatory process. For the resulting oscillation to be harmonic, the frequencies of the added oscillations must be the same,

and the resulting oscillation occurs at the same frequency

x = a·cos(ωt + α).

It is clear from the construction that

a² = a₁² + a₂² + 2a₁a₂·cos(α₂ - α₁). (2.14.2)

Let us analyze expression (2.14.2) for the amplitude of the resulting oscillation. If the phase difference of the added oscillations is equal to zero (the oscillations are in phase), the amplitude is equal to the sum of the amplitudes of the added oscillations, i.e. has the maximum possible value a = a₁ + a₂. If the phase difference is π (the oscillations are in antiphase), then the resulting amplitude is equal to the difference of the amplitudes, i.e. has the smallest possible value a = |a₁ - a₂|.

Addition of mutually perpendicular vibrations.

Let a particle perform two harmonic oscillations with the same frequency: one along a direction we denote x, the other along the perpendicular direction y. In this case the particle will move along some, in general curvilinear, trajectory whose shape depends on the phase difference of the oscillations.

We choose the origin of the time reference so that the initial phase of one oscillation is equal to zero:

x = a·cos(ωt),  y = b·cos(ωt + α). (2.14.3)

To obtain the equation of the particle's trajectory, it is necessary to eliminate t from (2.14.3). From the first equation, cos(ωt) = x/a, and hence sin(ωt) = ±√(1 - x²/a²). Let us rewrite the second equation:

y/b = cos(ωt + α) = cos(ωt)·cos α - sin(ωt)·sin α,

or

y/b = (x/a)·cos α ∓ √(1 - x²/a²)·sin α.

Transferring the first term from the right side of the equation to the left side, squaring the resulting equation and performing transformations, we obtain

x²/a² + y²/b² - (2xy/ab)·cos α = sin²α. (2.14.4)

This is the equation of an ellipse whose axes are rotated relative to the axes x and y through some angle. In some special cases, however, simpler results are obtained.

1. The phase difference α is zero. Then from (2.14.4) we get

(x/a - y/b)² = 0, or y = (b/a)x. (2.14.5)

This is the equation of a straight line (Fig. 2.14.3). Thus, the particle oscillates along this straight line with frequency ω and amplitude equal to √(a² + b²).

A vector diagram is a way to graphically define an oscillatory motion as a vector.

An oscillating value ξ (of any physical nature) is plotted along the horizontal axis. The vector plotted from the point 0 is equal in absolute value to the oscillation amplitude A and is directed at an angle α , equal to the initial phase of the oscillation, to the axis ξ. If we bring this vector into rotation with an angular velocity ω equal to the cyclic frequency of oscillations, then the projection of this vector onto the ξ axis gives the value of the oscillating quantity at an arbitrary moment in time.

Addition of oscillations of the same frequency and the same direction

Let there be two oscillations:

x₁ = A₁·cos(ωt + φ₁),  x₂ = A₂·cos(ωt + φ₂).

We build the vector diagrams and add the vectors. According to the law of cosines,

A² = A₁² + A₂² - 2A₁A₂·cos(π - (φ₂ - φ₁)).

Since cos(π - (φ₂ - φ₁)) = -cos(φ₂ - φ₁), then

A² = A₁² + A₂² + 2A₁A₂·cos(φ₂ - φ₁).

It is obvious (see the diagram) that the initial phase of the resulting oscillation is determined by the relation:

tan φ = (A₁·sin φ₁ + A₂·sin φ₂) / (A₁·cos φ₁ + A₂·cos φ₂).

Addition of oscillations of close frequencies

Suppose two oscillations with almost identical frequencies are added, i.e. ω₁ = ω, ω₂ = ω + Δω, with Δω ≪ ω.

From trigonometry:

cos α + cos β = 2·cos((α + β)/2)·cos((α - β)/2).

Applying this to our case (equal amplitudes A), we get:

x = x₁ + x₂ = 2A·cos(Δω·t/2)·cos(ωt).

The graph of the resulting oscillation is a beat graph: almost harmonic oscillations of frequency ω whose amplitude slowly changes with frequency Δω.

Because of the modulus sign (the amplitude is always > 0), the frequency with which the amplitude changes is not Δω/2 but twice that, Δω.

Addition of mutually perpendicular oscillations

Let a small body oscillate on mutually perpendicular springs of the same stiffness. On what trajectory will this body move?

The displacements are x = A·cos(ωt + φ₁), y = B·cos(ωt + φ₂). These are the trajectory equations in parametric form. To obtain an explicit relationship between the x and y coordinates, the parameter t must be excluded from the equations.

From the first equation: cos(ωt + φ₁) = x/A, so sin(ωt + φ₁) = ±√(1 - x²/A²).

From the second:

y/B = cos(ωt + φ₂) = cos(ωt + φ₁)·cos(φ₂ - φ₁) - sin(ωt + φ₁)·sin(φ₂ - φ₁).

After substitution:

y/B = (x/A)·cos(φ₂ - φ₁) ∓ √(1 - x²/A²)·sin(φ₂ - φ₁).

Let's get rid of the root:

x²/A² + y²/B² - (2xy/AB)·cos(φ₂ - φ₁) = sin²(φ₂ - φ₁)

is the equation of an ellipse.

Special cases: if the phase difference is 0 or π, the ellipse degenerates into a straight line; if it is ±π/2, the axes of the ellipse coincide with the x and y axes, and for equal amplitudes the trajectory becomes a circle.

27. Damped vibrations. Forced vibrations. Resonance.

Damping of free oscillations

Due to resistance, free oscillations always die out sooner or later. Let us consider the process of oscillation damping. We assume that the resistance force is proportional to the velocity of the body: F = -2mβv (the proportionality factor is written as 2mβ for reasons of convenience that will become clear later). We confine ourselves to the case when the damping over a period of oscillation is small. Then we may assume that the damping has little effect on the frequency but does affect the amplitude, so the equation of damped oscillations can be represented as

x = A(t)·cos(ωt + φ).

Here A(t) is some decreasing function that needs to be determined. We proceed from the law of conservation and transformation of energy: the change in the energy of the oscillations equals the work of the resistance force,

dE = -2mβv·dx.

We divide both sides of the equation by dt. On the right we will have dx/dt, i.e. the velocity v, and on the left the derivative of the energy with respect to time:

dE/dt = -2mβv².

Averaging over a period and taking into account that the average kinetic energy m⟨v²⟩/2 is equal to half of the total energy, ⟨Eₖ⟩ = E/2, we can write

dE/dt = -2βE.

We divide both sides by E, multiply by dt, and integrate both parts of the resulting equation:

ln E = -2βt + const.

After exponentiation we get E = C·e^(-2βt). The integration constant C is found from the initial conditions: let E = E₀ at t = 0; then C = E₀. Therefore,

E = E₀·e^(-2βt).

But E ~ A². Therefore, the amplitude of damped oscillations also decreases according to the exponential law:

A = A₀·e^(-βt).

And so, due to the resistance, the amplitude of the oscillations decreases, and in general they look as shown in Fig. 4.2. The coefficient β is called the damping coefficient. However, it does not by itself fully characterize the damping. Usually the damping of oscillations is characterized by the damping decrement, which shows how many times the oscillation amplitude decreases over a time equal to the period of oscillation. That is, the damping decrement is defined as follows:

D = A(t)/A(t + T) = e^(βT).

The logarithm of the damping decrement is called the logarithmic decrement; it is obviously equal to

λ = ln D = βT.

Forced vibrations

If the oscillatory system is subjected to the action of an external periodic force, then so-called forced oscillations arise, which have an undamped character. Forced oscillations should be distinguished from self-oscillations. In the case of self-oscillations, a special mechanism is assumed in the system which, in time with its own oscillations, "delivers" small portions of energy to the system from some energy reservoir. The natural oscillations are thereby maintained and do not decay. In the case of self-oscillations the system, as it were, pushes itself. A clock is an example of a self-oscillating system: it is equipped with a ratchet mechanism by which the pendulum receives small pushes (from a wound spring) in time with its own oscillations. In the case of forced oscillations, the system is pushed by an external force. Below we dwell on this case, assuming that the resistance in the system is small and can be neglected.

As a model of forced oscillations we take the same body suspended on a spring, acted upon by an external periodic force (for example, a force of electromagnetic nature). Neglecting the resistance, the equation of motion of such a body in projection on the x axis has the form

ẍ + ω₀²x = B·cos(ω*t),

where ω* is the cyclic frequency and B is the amplitude of the external force (referred to unit mass). It is known that steady oscillations exist. Therefore, we will look for a particular solution of the equation in the form of a sinusoidal function

x = A·cos(ω*t),

which we substitute into the equation, differentiating it twice with respect to time. The substitution leads to the relation

A·(ω₀² - ω*²)·cos(ω*t) = B·cos(ω*t).

The equation turns into an identity provided that the frequency of the sought oscillation coincides with the frequency of the external force and that

A = B/(ω₀² - ω*²).

Then the equation of forced oscillations can be represented as

x = B/(ω₀² - ω*²)·cos(ω*t).

The oscillations occur with a frequency coinciding with the frequency of the external force, and their amplitude is not set arbitrarily, as in the case of free oscillations, but establishes itself. This established value depends on the relation between the natural frequency of the system and the frequency of the external force according to the formula above.

Fig. 4.3 shows a plot of the dependence of the amplitude of forced oscillations on the frequency of the external force. It can be seen that the amplitude increases significantly as the frequency of the external force approaches the frequency of natural oscillations. The phenomenon of a sharp increase in the amplitude of forced oscillations when the natural frequency and the frequency of the external force coincide is called resonance.

According to the formula above, at resonance the oscillation amplitude should be infinitely large. In reality, at resonance the amplitude of forced oscillations is always finite. This is explained by the fact that at resonance and near it our assumption of a negligibly small resistance becomes incorrect: even if the resistance in the system is small, at resonance it is significant. Its presence makes the oscillation amplitude at resonance a finite value. Thus, the actual graph of the dependence of the oscillation amplitude on frequency has the form shown in Fig. 4.4. The greater the resistance in the system, the lower the maximum amplitude at the resonance point.

As a rule, resonance in mechanical systems is an undesirable phenomenon, and one tries to avoid it: mechanical structures subject to oscillations and vibrations are designed so that the natural frequency of oscillations is far from the possible frequencies of external influences. But in a number of devices resonance is used as a positive phenomenon. For example, the resonance of electromagnetic oscillations is widely used in radio communications, and the resonance of γ-rays in precision instruments.

    The state of the thermodynamic system. Processes

Thermodynamic states and thermodynamic processes

When, in addition to the laws of mechanics, the application of the laws of thermodynamics is required, the system is called a thermodynamic system. The need to use this concept arises if the number of elements of the system (for example, the number of gas molecules) is very large, and the movement of its individual elements is microscopic in comparison with the movement of the system itself or its macroscopic components. In this case, thermodynamics describes macroscopic movements (changes in macroscopic states) of a thermodynamic system.

The parameters describing such movement (changes) of a thermodynamic system are usually divided into external and internal. This division is very conditional and depends on the specific task. So, for example, gas in a balloon with an elastic shell has the pressure of the surrounding air as an external parameter, and for a gas in a vessel with a rigid shell, the external parameter is the volume bounded by this shell. In a thermodynamic system, volume and pressure can vary independently of each other. For a theoretical description of their change, it is necessary to introduce at least one more parameter - temperature.

In most thermodynamic problems, three parameters are sufficient to describe the state of a thermodynamic system. In this case, changes in the system are described using three thermodynamic coordinates associated with the corresponding thermodynamic parameters.

An equilibrium state (a state of thermodynamic equilibrium) is a state of a thermodynamic system in which there are no flows (of energy, matter, momentum, etc.) and the macroscopic parameters of the system are steady and do not change in time.

Classical thermodynamics states that an isolated thermodynamic system (left to itself) tends to a state of thermodynamic equilibrium and, after reaching it, cannot spontaneously leave it. This statement is often called zero law of thermodynamics.

Systems in a state of thermodynamic equilibrium have the following properties:

If two thermodynamic systems that have thermal contact are in a state of thermodynamic equilibrium, then the total thermodynamic system is also in a state of thermodynamic equilibrium.

If any thermodynamic system is in thermodynamic equilibrium with two other systems, then these two systems are in thermodynamic equilibrium with each other.

Let us consider thermodynamic systems that are in a state of thermodynamic equilibrium. (The description of systems in a non-equilibrium state, that is, in a state where macroscopic flows take place, is the subject of non-equilibrium thermodynamics.) The transition from one thermodynamic state to another is called a thermodynamic process. Below we will consider only quasi-static processes or, what is the same, quasi-equilibrium processes. The limiting case of a quasi-equilibrium process is an infinitely slow equilibrium process consisting of continuously successive states of thermodynamic equilibrium. In reality, such a process cannot take place; however, if macroscopic changes in the system occur rather slowly (over time intervals significantly exceeding the time for establishing thermodynamic equilibrium), the real process can be approximated as quasi-static (quasi-equilibrium). Such an approximation makes it possible to carry out calculations with sufficiently high accuracy for a large class of practical problems. An equilibrium process is reversible, that is, returning the state parameters to the values they had at an earlier moment brings the thermodynamic system back to its earlier state without any changes in the bodies surrounding the system.

The practical application of quasi-equilibrium processes in any technical devices is ineffective. Thus, the use of a quasi-equilibrium process in a heat engine, for example, one that occurs at a practically constant temperature (see the description of the Carnot cycle in the third chapter), inevitably leads to the fact that such a machine will work very slowly (in the limit - infinitely slowly) and have a very small power. Therefore, in practice, quasi-equilibrium processes in technical devices are not used. Nevertheless, since the predictions of equilibrium thermodynamics for real systems coincide with a sufficiently high accuracy with experimental data for such systems, it is widely used to calculate thermodynamic processes in various technical devices.

If, during a thermodynamic process, the system returns to its original state, then such a process is called circular or cyclic. Circular processes, as well as any other thermodynamic processes, can be both equilibrium (and therefore reversible) and non-equilibrium (irreversible). In a reversible circular process, after the thermodynamic system returns to its original state, no thermodynamic perturbations arise in the bodies surrounding it, and their states remain in equilibrium. In this case, the external parameters of the system, after the implementation of the cyclic process, return to their original values. In an irreversible circular process, after its completion, the surrounding bodies pass into non-equilibrium states and the external parameters of the thermodynamic system change.

Complex amplitude method

The position of a point on a plane can be uniquely specified by a complex number:

$z = x + iy.$

If the point ($A$) rotates about the origin at distance $a$ with angular velocity ${\omega}_0$, then the coordinates of this point change in accordance with the law:

$x = a\cos({\omega}_0 t + \delta),\quad y = a\sin({\omega}_0 t + \delta).$

Using Euler's formula, we can write $z$ in the form:

$z = a e^{i({\omega}_0 t + \delta)}, \qquad (4)$

where $Re(z)=x$, that is, the physical quantity $x$ is equal to the real part of the complex expression (4). In this case, the modulus of the complex expression is equal to the oscillation amplitude $a$, and its argument is equal to the phase ${\omega}_0 t+\delta$. Sometimes, when taking the real part of $z$, the sign of the operation Re is omitted and a symbolic expression is obtained:

$x = a e^{i({\omega}_0 t + \delta)}. \qquad (5)$

Expression (5) should not be taken literally. Often (5) is formally simplified:

$x = A e^{i{\omega}_0 t}, \qquad (6)$

where $A=ae^{i\delta}$ is the complex oscillation amplitude. The complex nature of the amplitude $A$ means that the oscillation has an initial phase that is not equal to zero.

In order to reveal the physical meaning of an expression like (6), we assume that the oscillation frequency (${\omega}_0$) has real and imaginary parts and can be represented as:

${\omega}_0 = {\omega}_1 + i{\omega}_2. \qquad (7)$

Then expression (6) can be written as:

$x = A e^{i{\omega}_1 t} e^{-{\omega}_2 t}. \qquad (8)$

If ${\omega}_2>0,$ then expression (8) describes damped harmonic oscillations with circular frequency ${\omega}_1$ and damping index ${\omega}_2$. If ${\omega}_2<0$, the amplitude of the oscillations grows exponentially with time.

Comment

Many mathematical operations can be performed on complex quantities as if the quantities were real. This is possible for operations that are themselves linear and real (such as addition, multiplication by a real constant, differentiation with respect to a real variable, and others, but not all). It must be remembered that complex quantities in themselves do not correspond to any physical quantities.

Vector diagram method

Let the point $A$ rotate uniformly around a circle of radius $r$ (Fig. 1) with angular velocity ${\omega}_0$.

Picture 1.

The position of the point $A$ on the circle can be specified using the angle $\varphi$. This angle is:

$\varphi = {\omega}_0 t + \delta,$

where $\delta =\varphi (t=0)$ is the angle of rotation of the radius vector $\overrightarrow{r}$ at the initial moment of time. As the point $A$ rotates, its projection onto the $X$ axis moves along the diameter of the circle, performing harmonic oscillations between the points $M$ and $N$. The abscissa of $A$ can be written as:

$x = r\cos({\omega}_0 t + \delta).$

In a similar way, fluctuations of any magnitude can be represented.

It is only necessary to take, as the image of the oscillating quantity, the abscissa of the point $A$ that rotates uniformly around the circle. One can, of course, use the ordinate instead:

$y = r\sin({\omega}_0 t + \delta).$

Remark 1

In order to represent damped oscillations, one must take not a circle but a logarithmic spiral approaching its focus. If the speed of approach of a point moving along the spiral is constant and the point moves towards the focus, then the projection of this point onto the $X$ axis will reproduce the formulas for damped oscillations.

Remark 2

Instead of a point, you can use a radius vector that will rotate uniformly around the origin. Then the value that performs harmonic oscillations will be depicted as a projection of this vector onto the $X$ axis. In this case, mathematical operations on the quantity $x$ are replaced by operations on a vector.

So the operation of summing two quantities:

$x_1 = A_1\cos({\omega}_0 t+{\delta}_1),\quad x_2 = A_2\cos({\omega}_0 t+{\delta}_2)$

is more conveniently replaced by the summation of two vectors (using the parallelogram rule). The vectors are chosen so that their projections onto the chosen $X$ axis are the expressions $x_1$ and $x_2$. Then the projection of the vector sum onto the $X$ axis will be equal to $x_1+x_2$.

Example 1

Let us demonstrate the application of the method of vector diagrams.

So, let's represent complex numbers as vectors on the complex plane. A quantity that changes according to the harmonic law is represented by a vector that rotates counterclockwise around its origin with the frequency ${\omega}_0$. The length of the vector is equal to the amplitude of the oscillations.

As a graphical method for solving, for example, the equation

$U = IZ,$

where $Z=R+i(\omega L-\frac{1}{\omega C})$ is the impedance, we can use Fig. 2. This figure shows a vector diagram of voltages in an AC circuit.

Figure 2.

Let us take into account that multiplication of a complex quantity by the imaginary unit means its rotation by an angle $90^{\circ}$ counterclockwise, and multiplication by ($-i$) its rotation by the same angle clockwise. From Fig. 2 it follows that:

$\tan \varphi =\frac{\omega L-\frac{1}{\omega C}}{R},$

where $-\frac{\pi }{2}\le \varphi \le \frac{\pi }{2}.$ The change in the angle $\varphi$ depends on the relationship between the impedances of the circuit elements and the frequency. The external voltage can change in phase, from coinciding with the voltage across the inductance to coinciding with the voltage across the capacitance. This is usually expressed as relations between the voltage phases on the circuit elements and the phase of the external voltage:

    The phase of the voltage on the inductance ($U_L=i\omega L I$) always leads the phase of the external voltage by an angle from $0$ to $\pi.$

    The phase of the voltage on the capacitance ($U_C=-\frac{iI}{\omega C}$) always lags behind the phase of the external voltage by an angle from $0$ to $-\pi.$

    In this case, the phase on the resistance can either lead or lag behind the phase of the external voltage by an angle between $-\frac{\pi }{2}$ and $\frac{\pi }{2}$.

The vector diagram (Fig. 2) allows us to formulate the following:

    The phase of the voltage across the inductance leads the phase of the current by $\frac{\pi }{2}$.

    The phase of the voltage across the capacitance lags behind the phase of the current by $\frac{\pi }{2}$.

    The phase of the voltage across the resistance coincides with the phase of the current.

Example 2

Exercise: Demonstrate that the squaring operation cannot be applied to complex quantities as to real numbers.

Solution:

Suppose we need to square the real number $x$. The correct answer is $x^2$. Let us formally apply the complex method and make the replacement:

$x\to x+iy$. Squaring the resulting expression, we get:

\[(\left(x+iy\right))^2=x^2-y^2+2xyi\ \left(2.1\right).\]

The real part of expression (2.1) is:

\[(Re\left(x+iy\right))^2=Re\left(x^2-y^2+2xyi\right)=x^2-y^2\ne x^2.\]

The reason for the error is that the squaring operation is not linear.


Harmonic vibrations

That is, in fact, the sine graph is obtained from the rotation of a vector, which is described by the formula:

f(t) = A sin(ωt + φ),

where A is the length of the vector (the oscillation amplitude), φ is the initial angle (phase) of the vector at time zero, and ω is the angular velocity of rotation, which is equal to:

ω = 2πf, where f is the frequency in hertz.

As we can see, knowing the signal frequency, amplitude and angle, we can build a harmonic signal.

The magic begins when it turns out that absolutely any signal can be represented as a sum (often infinite) of various sinusoids. In other words, as a Fourier series.
I will give an example from the English Wikipedia: let's take a sawtooth signal.


sawtooth signal

Its sum is represented by the following formula:

x(t) = (2/π) Σ (-1)^(n+1) sin(nωt)/n, where the sum runs over n = 1, 2, 3, …

If we sum term by term, taking first n=1, then n=2, etc., we will see how our harmonic sinusoidal signal gradually turns into a saw:

Probably the most beautiful way to illustrate this is one program that I found on the Internet. It has already been said above that the sine graph is a projection of a rotating vector, but what about more complex signals? This, oddly enough, is a projection of a set of rotating vectors, or rather their sum, and it all looks like this:


Vector drawing saw.

In general, I recommend that you follow the link yourself and try to play around with the parameters yourself, and see how the signal changes. IMHO I have not yet seen a more visual toy for understanding.

It should also be noted that there is an inverse procedure that allows you to get the frequency, amplitude and initial phase (angle) from a given signal, which is called the Fourier Transform.


Fourier series expansion of some known periodic functions (from here)

I will not dwell on it in detail, but I will show how it can be applied in life. In the list of references I will recommend where you can read more about the materiel.

Let's move on to the practical exercises!

It seems to me that every student sitting in a lecture, for example in calculus, asks the question: why do I need all this nonsense? And as a rule, not finding an answer in the foreseeable future, he unfortunately loses interest in the subject. Therefore, I will immediately show the practical application of this knowledge, and you will master the theory yourself :).

I will implement everything further on this site. I did everything, of course, under Linux, but I didn’t use any specifics, in theory the program will compile and work under other platforms.

First, let's write a program to generate an audio file. The wav file was chosen as the simplest format. You can read about its structure.
In short, the wav file is structured as follows: a header that describes the file format, followed (in our case) by an array of 16-bit signed data of length sample_rate·t, i.e. 44100·t samples for t seconds of sound.

An example was taken as the basis for forming the sound file. I modified it a little, fixed the errors, and the final version with my edits is now on GitHub.

Let's generate a two-second sound file with a pure sine frequency of 100 Hz. To do this, we modify the program in the following way:

#define S_RATE   (44100)       /* sampling rate */
#define BUF_SIZE (S_RATE*10)   /* 10 second buffer */
...
int main(int argc, char **argv)
{
    ...
    float amplitude = 32000;   /* take the maximum possible amplitude */
    float freq_Hz = 100;       /* signal frequency */

    /* fill buffer with a sine wave */
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = (short)(amplitude * sin(2 * M_PI * freq_Hz * i / S_RATE));
    ...
}

I draw your attention to the fact that the pure sine formula corresponds to the one we talked about above. The amplitude 32000 (we could have taken 32767) corresponds to the range of values a signed 16-bit number can take (from -32768 to +32767).

As a result, we get the following file (you can even listen to it with any sound-playing program). Let's open this file in Audacity and see that the signal graph actually corresponds to a pure sine:


Pure tube sine

Let's look at the spectrum of this sine (Analysis-> Plot Spectrum)


Spectrum Plot

A clear peak is visible at 100 Hz (logarithmic scale). What is a spectrum? It is the amplitude-frequency response. There is also a phase response. Remember I said above that to build a signal you need to know its frequency, amplitude and phase? Well, those parameters can be extracted from the signal. In this case we have a graph of amplitude versus frequency, with the amplitude not in real units but in decibels.

I understand that to explain how the program works it would be necessary to explain what the fast Fourier transform is, and that is at least one more article.

First, let's allocate arrays:

c   = calloc(size_array*2, sizeof(float));  // array of rotation factors
in  = calloc(size_array*2, sizeof(float));  // input array
out = calloc(size_array*2, sizeof(float));  // output array

Let me just say that in the program we read data into an array of length size_array (which we take from the header of the wav file).

while (fread(&value, sizeof(value), 1, wav)) {
    in[j] = (float)value;
    j += 2;
    if (j > 2 * size_array)
        break;
}

The array for the fast Fourier transform must be a sequence (re, im, re, im, ... re, im), where fft_size = 1 << p is the number of FFT points. In plain language:
it is an array of complex numbers. I'm even afraid to imagine where the complex Fourier transform is used, but in our case the imaginary part is zero, and the real part is equal to the value of each point of the array.
Another feature of the fast Fourier transform is that it works only with arrays whose length is a power of two. As a result, we must calculate the appropriate power of two:

int p2 = (int)(log2(header.bytes_in_data / header.bytes_by_capture));

The logarithm of the number of bytes in the data divided by the number of bytes at one point.

After that, we calculate the rotation factors:

fft_make(p2, c);  // calculates the rotation factors for the FFT (the first parameter is the power of two, the second is the allocated array of rotation factors)

And we feed our read array into the Fourier transform:

fft_calc(p2, c, in, out, 1);  // (the one means we get a normalized array)

At the output, we get complex numbers of the form (re, im, re, im, ... re, im). For those who do not know what a complex number is, I will explain. I started this article for a reason with a bunch of spinning vectors and a bunch of GIFs. So, the vector on the complex plane is determined by the real coordinate a1 and the imaginary coordinate a2. Or length (this is our amplitude Am) and angle Psi (phase).


Vector on the complex plane

Note that size_array=2^p2. The first point of the array corresponds to the frequency of 0 Hz (constant), the last point corresponds to the sampling frequency, namely 44100 Hz. As a result, we must calculate the frequency corresponding to each point, which will differ by the delta frequency:

double delta = ((float)header.frequency) / (float)size_array;  // sampling rate divided by the array size

We allocate an array of amplitudes:

double *ampl;
ampl = calloc(size_array*2, sizeof(double));

And look at the picture: the amplitude is the length of the vector. And we have its projections on the real and imaginary axis. As a result, we will have a right-angled triangle, and here we recall the Pythagorean theorem, and calculate the length of each vector, and immediately write it to a text file:

for (i = 0; i < size_array; i += 2) {
    fprintf(logfile, "%.6f %f\n", cur_freq,
            sqrt(out[i]*out[i] + out[i+1]*out[i+1]));
    cur_freq += delta;
}
The result is a file that looks like this:

… 11.439514 10.943008 11.607742 56.649738 11.775970 15.652428 11.944199 21.872342 12.112427 30.635371 12.280655 30.329171 12.448883 11.932371 12.617111 20.777617 ...

Let's try!

Now let's feed our sine sound file to the resulting program:

./fft_an ../generate_wav/sin\ 100\ Hz.wav
format: 16 bits, PCM uncompressed, channel 1, freq 44100, 88200 bytes per sec, 2 bytes by capture, 16 bits per sample, 882000 bytes in data
chunk=441000 log2=18
size array=262144
wav format
Max Freq = 99.928, amp = 7216.136

And we get a text file of frequency response. We build its graph using gnuplot

Build script:

#!/usr/bin/gnuplot -persist
set terminal postscript eps enhanced color solid
set output "result.ps"
#set terminal png size 800, 600
#set output "result.png"
set grid xtics ytics
set log xy
set xlabel "Freq, Hz"
set ylabel "Amp, dB"
set xrange [:22050]
#set yrange
plot "test.txt" using 1:2 title "AFC" with lines linestyle 1

Pay attention to the limit on the X range in the script (set xrange). Our sampling frequency is 44100 Hz, and if we recall the Kotelnikov (Nyquist-Shannon) theorem, the signal frequency cannot be higher than half the sampling frequency; therefore, we are not interested in anything above 22050 Hz. As to why, I advise you to read the special literature.
So (drum roll), run the script and see:


The spectrum of our signal

Note the sharp peak at 100 Hz. Don't forget that the axes are logarithmic! The fuzz to the right of the peak is, I think, an artifact of the Fourier transform (spectral leakage; this is where window functions come to mind).

Let's indulge, shall we?

And let's! Let's see the spectra of other signals!

Noise all around...
First, let's plot the noise spectrum. The topic of noise, random signals, etc. deserves a separate course, but we will touch on it a little here. Let's modify our wav-file generation program by adding one procedure:

double d_random(double min, double max)
{
    return min + (max - min) / RAND_MAX * rand();
}

It will generate a random number within the given range. As a result, main will look like this:

int main(int argc, char *argv[])
{
    int i;
    float amplitude = 32000;
    srand((unsigned int)time(0)); /* initialize the random number generator */
    for (i = 0; i

Let's generate a file (I recommend listening to it) and open it in audacity.


Signal in audacity

Let's look at the spectrum in audacity.


Spectrum

And let's see the spectrum using our program:


Our spectrum

I want to draw attention to a very interesting property of noise: its spectrum contains all frequencies. As can be seen from the graph, the spectrum is fairly flat. White noise is typically used for frequency analysis of the bandwidth of, for example, audio equipment. There are other kinds of noise: pink, blue and others. Your homework is to find out how they differ.

What about compote?

And now let's look at another interesting signal: the square wave (in the Russian literature, "meander"). Above I gave a table of Fourier-series expansions of various signals; look up how the square wave decomposes, write it down on a piece of paper, and we will continue.

To generate a square wave with a frequency of 25 Hz, we once again modify our wav-file generator:

int main(int argc, char *argv[])
{
    int i;
    short int meandr_value = 32767;
    /* fill buffer with a square wave */
    for (i = 0; i
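Again the listing is truncated, so here is a self-contained sketch of filling the buffer with a 25 Hz square wave at 44100 Hz (SAMPLE_RATE and fill_meandr are my names; the sign flips every half period, i.e. every 44100 / 25 / 2 = 882 samples):

```c
#define SAMPLE_RATE 44100

/* fill buffer with a full-scale square wave of the given frequency */
void fill_meandr(short int *buffer, int n, int freq)
{
    int half_period = SAMPLE_RATE / freq / 2; /* 882 samples for 25 Hz */
    short int meandr_value = 32767;
    int i;
    for (i = 0; i < n; i++) {
        if (i != 0 && i % half_period == 0)
            meandr_value = -meandr_value; /* flip the sign each half period */
        buffer[i] = meandr_value;
    }
}
```

The buffer is then written into the WAV file exactly as in the sine generator.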

As a result, we get an audio file (again, I advise you to listen to it), which we should immediately look at in audacity.


His Majesty the square wave, or "a healthy person's meander"

Without further ado, let's look at its spectrum:


Square-wave spectrum

So far it is not very clear what is going on here... Let's zoom in on the first few harmonics:


First harmonics

Quite another matter! Now let's check against the table. Look: we only have the 1st, 3rd, 5th, etc., i.e. only odd harmonics. The first harmonic is at 25 Hz, the next (the third) at 75 Hz, then 125 Hz, and so on, while the amplitude gradually decreases. Theory meets practice!
And now, attention! In real life, a square-wave signal has an infinite number of harmonics of higher and higher frequency, but real electrical circuits, as a rule, cannot pass frequencies above a certain limit (due to the inductance and capacitance of the traces). As a result, you can often see the following signal on an oscilloscope screen:


"A smoker's meander"

This picture looks just like the one from Wikipedia, where the square wave is built not from all of its frequencies, but only from the first few.


The sum of the first harmonics, and how the signal changes

The square wave is also actively used in radio engineering (it must be said that it is the basis of all digital technology), and it is worth understanding that over long transmission lines it can be filtered beyond recognition. It is also used to check the frequency response of various devices. Another interesting fact: TV jammers worked precisely on the principle of higher harmonics. The chip itself generated a square wave at tens of MHz, but its higher harmonics reached hundreds of MHz, right at the frequency of the TV broadcast, which they successfully jammed.

In general, the topic of such experiments is endless, and you can now continue it yourself.


Book

For those who do not understand what we are doing here, or, on the contrary, for those who understand but want to understand even better, as well as for students studying DSP, I highly recommend this book. It is a "DSP for dummies": the most complex concepts are explained there in language accessible even to a child.

Conclusion

In conclusion, I want to say that mathematics is the queen of sciences, but without real application many people lose interest in it. I hope this post inspires you to study such a wonderful subject as signal processing, and analog circuit design in general (plug your ears so your brains don't leak out!). :)
Good luck!


Vector diagram. Addition of oscillations of the same direction.

The solution of a number of problems, in particular the addition of several oscillations of the same direction (or, what is the same thing, the addition of several harmonic functions), is greatly facilitated and becomes clear if the oscillations are depicted graphically as vectors on a plane. The scheme obtained in this way is called a vector diagram.

Take an axis, which we denote by the letter x (Fig. 55.1). From the point O taken on this axis, we lay off a vector of length a that forms an angle α with the axis.

If we bring this vector into rotation with angular velocity ω, then the projection of the end of the vector will move along the x-axis in the range from -a to +a, and the coordinate of this projection will change with time according to the law

x = a cos(ωt + α).

Consequently, the projection of the end of the vector onto the axis will perform a harmonic oscillation with an amplitude equal to the length of the vector, with a circular frequency equal to the angular velocity of rotation of the vector, and with an initial phase equal to the angle formed by the vector with the axis at the initial moment of time.

From what has been said, it follows that a harmonic oscillation can be specified using a vector whose length is equal to the amplitude of the oscillation, and the direction of the vector forms an angle with the x-axis equal to the initial phase of the oscillation.

Consider the addition of two harmonic oscillations of the same direction and the same frequency. The displacement x of the oscillating body will be the sum of the displacements x1 and x2, which are written as follows:

x1 = a1 cos(ωt + α1),   x2 = a2 cos(ωt + α2).   (55.1)

Let us represent both oscillations with the help of vectors a1 and a2 (Fig. 55.2) and construct the resulting vector a according to the rules of vector addition.

It is easy to see that the projection of this vector onto the x-axis is equal to the sum of the projections of the component vectors: x = x1 + x2.

Therefore, the vector a represents the resulting oscillation. This vector rotates with the same angular velocity ω as the vectors a1 and a2, so the resulting motion will be a harmonic oscillation with frequency ω, amplitude a and initial phase α. It is clear from the construction that

a² = a1² + a2² + 2 a1 a2 cos(α2 - α1),   (55.2)

tan α = (a1 sin α1 + a2 sin α2) / (a1 cos α1 + a2 cos α2).   (55.3)

So, the representation of harmonic oscillations by means of vectors makes it possible to reduce the addition of several oscillations to the operation of adding vectors. This technique is especially useful, for example, in optics, where the light oscillation at a certain point is determined as the result of the superposition of many oscillations arriving at that point from different sections of the wave front.

Formulas (55.2) and (55.3) can, of course, be obtained by adding expressions (55.1) directly and performing the corresponding trigonometric transformations. But the way we have used to obtain them is simpler and clearer.

Let us analyze expression (55.2) for the amplitude. If the phase difference α2 - α1 of the two oscillations is zero, the amplitude of the resulting oscillation is equal to the sum a1 + a2. If the phase difference is equal to +π or -π, i.e. the oscillations are in antiphase, then the amplitude of the resulting oscillation is equal to |a1 - a2|.

If the oscillation frequencies are not the same, the vectors a1 and a2 will rotate at different speeds. In this case the resulting vector a pulsates in magnitude and rotates at a non-constant rate. Consequently, the resulting motion will not be a harmonic oscillation, but some complex oscillatory process.