1
00:00:06,210 --> 00:00:06,930
Hi, guys.

2
00:00:07,110 --> 00:00:15,240
In this lesson, I'll explain how Kafka's data is stored and how to write a message to Kafka.

3
00:00:15,480 --> 00:00:17,490
Kafka stores data using topics.

4
00:00:17,850 --> 00:00:20,040
Each topic has its own name.

5
00:00:20,670 --> 00:00:23,430
These topics are stored in brokers.

6
00:00:23,610 --> 00:00:27,960
Topics are used for reading, as well as for writing data to Kafka.

7
00:00:28,380 --> 00:00:36,060
To write the data, we use producers. Producers publish data to the topics, and topics receive the data

8
00:00:36,060 --> 00:00:42,390
and store it. If we dive deep into the topics, we encounter partitions.

9
00:00:42,660 --> 00:00:47,520
In fact, what we call a topic is a structure consisting of partitions.

10
00:00:47,970 --> 00:00:51,750
In fact, we write the data to the partition, not the topic.

11
00:00:52,050 --> 00:00:57,600
We can specify the number of partitions for each topic according to our needs.

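As a sketch of how the partition count is specified, a topic could be created with Kafka's bundled CLI tool; the topic name, replication factor, and broker address below are assumptions for illustration:

```shell
# Create a topic named "payments" with 3 partitions
# (topic name and broker address are assumed here).
kafka-topics.sh --create \
  --topic payments \
  --partitions 3 \
  --replication-factor 1 \
  --bootstrap-server localhost:9092
```
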
12
00:00:57,990 --> 00:01:01,020
We can specify it according to our event load.

13
00:01:01,500 --> 00:01:04,590
Each partition has a unique number.

14
00:01:04,590 --> 00:01:07,590
A partition actually uses the log principle.

15
00:01:07,800 --> 00:01:16,800
So the data we write is always appended to the end of the partition, so we cannot write to the front

16
00:01:16,800 --> 00:01:17,610
of the partition.

17
00:01:17,700 --> 00:01:26,250
Data is written to the partition in the order in which we sent it, and written data cannot be changed.

18
00:01:26,490 --> 00:01:27,990
So they are immutable.

19
00:01:28,230 --> 00:01:30,540
The data is stored on the hard disk.

20
00:01:30,810 --> 00:01:33,000
Data is not stored forever.

21
00:01:33,270 --> 00:01:35,910
Kafka has two different storage configurations.

22
00:01:36,120 --> 00:01:39,690
One of them is the time-based storage configuration.

23
00:01:40,050 --> 00:01:46,290
This stores the data for seven days by default, so data older than seven days will be automatically deleted.

24
00:01:46,680 --> 00:01:48,180
Of course, we can change it.

25
00:01:48,480 --> 00:01:52,650
The second one is the data-size-based storage configuration.

26
00:01:52,920 --> 00:02:00,540
For example, if we set it as 50 gigabytes, the data starts to be truncated when the total size of the data

27
00:02:00,540 --> 00:02:02,460
exceeds 50 gigabytes.

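As a sketch, these two retention options correspond to broker-level settings; the values below are the common defaults (seven days, no size limit), shown here as an assumed `server.properties` fragment:

```properties
# Time-based retention: delete log segments older than 7 days (168 hours).
log.retention.hours=168
# Size-based retention: maximum partition size before old data is truncated;
# -1 (the default) disables the size limit.
log.retention.bytes=-1
```
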
28
00:02:02,640 --> 00:02:09,930
This method is not the preferred one because the results are unpredictable. An offset is assigned to

29
00:02:09,930 --> 00:02:11,520
each message in the partition.

30
00:02:11,670 --> 00:02:14,160
We can read this data using the offset.

31
00:02:14,460 --> 00:02:18,360
Actually, the offset is used to determine the position of the data.

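The append-only log and offset idea above can be sketched in a few lines of Python; this is an illustration of the principle, not Kafka's actual implementation:

```python
# Minimal sketch: a partition as an append-only log where each record's
# offset is simply its position in the log.
class Partition:
    def __init__(self):
        self._log = []  # records are only ever appended, never edited

    def append(self, record):
        offset = len(self._log)  # next offset = current log length
        self._log.append(record)
        return offset

    def read(self, offset):
        # an offset pinpoints one immutable record in the partition
        return self._log[offset]

p = Partition()
first = p.append("order-created")
second = p.append("order-paid")
print(first, second, p.read(0))  # offsets grow sequentially from 0
```
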
32
00:02:18,990 --> 00:02:22,830
Now, let's see how we can use partitions in writing.

33
00:02:23,280 --> 00:02:28,770
Let's assume that we have three different partitions, and we want to send data to them.

34
00:02:28,920 --> 00:02:32,370
OK, but how do we decide which partition to write to?

35
00:02:32,700 --> 00:02:37,200
Here we can design how the partitions are used according to the decisions we make.

36
00:02:37,590 --> 00:02:41,580
For example, if we don't specify anything when sending a message,

37
00:02:41,880 --> 00:02:48,540
Kafka uses the round-robin method, so it distributes incoming messages across all partitions in order.

38
00:02:48,870 --> 00:02:55,650
So it sends the first incoming message to the first partition and the second message to the second partition

39
00:02:56,040 --> 00:03:00,180
and the third message to the third partition and goes on like that.

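The round-robin behavior described above can be simulated in plain Python; this is an illustrative sketch, not Kafka's internal partitioner code:

```python
from itertools import cycle

# Simulate round-robin assignment: with no key, messages are spread
# over the partitions in order, wrapping back to the first.
def round_robin_assign(messages, num_partitions):
    partitions = [[] for _ in range(num_partitions)]
    next_partition = cycle(range(num_partitions))
    for msg in messages:
        partitions[next(next_partition)].append(msg)
    return partitions

parts = round_robin_assign(["m1", "m2", "m3", "m4"], 3)
print(parts)  # [['m1', 'm4'], ['m2'], ['m3']]
```
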
40
00:03:00,420 --> 00:03:03,360
The other partition design method is record key.

41
00:03:03,630 --> 00:03:06,480
We can give a key to the message that we send.

42
00:03:06,780 --> 00:03:12,660
Kafka uses these key values and writes messages with the same key to the same partition.

43
00:03:12,960 --> 00:03:20,070
For example, let's assume that we have different partitions for payment events, and these partitions

44
00:03:20,310 --> 00:03:25,500
expect different payment types, such as credit card, PayPal, or other.

45
00:03:25,800 --> 00:03:30,870
When we send the message with the credit card key, it is sent to the first partition.

46
00:03:31,230 --> 00:03:37,950
The PayPal message to the second partition and the message with the other key to the third partition.

47
00:03:38,310 --> 00:03:45,090
We actually aggregate the data using this method, so we can say that we can group written data

48
00:03:45,150 --> 00:03:48,210
according to certain characteristics in partitions.

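Key-based routing can be sketched as follows. Kafka's default partitioner hashes the key bytes (with murmur2) modulo the partition count; a plain CRC32 hash is used here purely for illustration:

```python
import zlib

# Sketch of key-based partitioning: the same key always hashes to
# the same partition (CRC32 stands in for Kafka's murmur2 here).
def partition_for(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions

events = [(b"credit-card", "pay-1"), (b"paypal", "pay-2"),
          (b"credit-card", "pay-3"), (b"other", "pay-4")]
placed = {}
for key, event in events:
    placed.setdefault(partition_for(key, 3), []).append(event)
# pay-1 and pay-3 share the credit-card key, so they land together
```
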
49
00:03:49,050 --> 00:03:53,110
Let's continue with the record-key partition design method.

50
00:03:53,130 --> 00:03:57,210
We can provide ordering or event sourcing in the partitions.

51
00:03:57,480 --> 00:04:05,040
For example, we can send the customer ID as the record key, and we can store each user's events sequentially

52
00:04:05,160 --> 00:04:06,330
in the same partition.

53
00:04:06,450 --> 00:04:09,750
So here we are actually doing event sourcing.

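The per-customer ordering idea can be sketched like this; the hash function and customer ID are illustrative stand-ins, not Kafka's real key hashing:

```python
# Sketch: all events with the same customer ID go to one partition,
# so that customer's events are stored and read back in send order.
NUM_PARTITIONS = 3
partitions = [[] for _ in range(NUM_PARTITIONS)]

def hash_of(customer_id: str) -> int:
    # stable stand-in for Kafka's key hashing
    return sum(map(ord, customer_id)) % NUM_PARTITIONS

def send(customer_id: str, event: str):
    partitions[hash_of(customer_id)].append(event)

for event in ["signed-up", "added-item", "checked-out"]:
    send("customer-42", event)

target = partitions[hash_of("customer-42")]
print(target)  # events appear in the exact order they were sent
```
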
54
00:04:10,350 --> 00:04:12,230
So by using a record key,

55
00:04:12,300 --> 00:04:15,450
we can use partitions according to our needs.

56
00:04:16,110 --> 00:04:19,140
Another benefit of partitions is parallelism.

57
00:04:19,710 --> 00:04:22,920
Kafka topics are divided into a number of partitions.

58
00:04:23,130 --> 00:04:27,870
Partitions allow us to parallelize a topic by splitting the data.

59
00:04:28,230 --> 00:04:33,750
We can create multiple partitions and we can increase the performance with parallelism.

60
00:04:33,960 --> 00:04:41,250
Consumers can also be parallelized so that multiple consumers can read from multiple partitions in a topic,

61
00:04:41,400 --> 00:04:44,160
allowing for a very high message processing throughput.

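The consumer parallelism described above can be simulated with one worker per partition; this is a sketch using in-memory lists rather than a real broker:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: one consumer (worker) per partition, processing in parallel.
partitions = [["a1", "a2"], ["b1"], ["c1", "c2", "c3"]]

def consume(partition):
    # a real consumer would poll the broker; here we just count records
    return len(partition)

with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
    counts = list(pool.map(consume, partitions))
print(counts)  # one result per partition: [2, 1, 3]
```
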
62
00:04:44,910 --> 00:04:49,590
OK, now we have learned the writing mechanism in Kafka.

63
00:04:50,190 --> 00:04:56,820
From now on, we can check the reading mechanism in Kafka, but we will cover it in the next lesson.

64
00:04:57,210 --> 00:04:58,980
That's all for this lesson.

65
00:04:59,250 --> 00:04:59,730
Thank you.

