[RocketMQ] DLedger Log Replication Source Code Analysis

Message Storage

As mentioned in the earlier article [RocketMQ] Message Storage, when the Broker receives a message it calls CommitLog's asyncPutMessage method to write it. In DLedger mode, DLedgerCommitLog is used instead. Entering its asyncPutMessages method, the main processing logic is:

  1. Call the serialize method to serialize the message data;
  2. Build the batch append request BatchAppendEntryRequest and set the message data serialized in the previous step on it;
  3. Call the handleAppend method to submit the append request and write the messages;

    public class DLedgerCommitLog extends CommitLog {
        @Override
        public CompletableFuture<PutMessageResult> asyncPutMessages(MessageExtBatch messageExtBatch) {
            // ...
            AppendMessageResult appendResult;
            BatchAppendFuture<AppendEntryResponse> dledgerFuture;
            EncodeResult encodeResult;
            // Serialize the message data
            encodeResult = this.messageSerializer.serialize(messageExtBatch);
            if (encodeResult.status != AppendMessageStatus.PUT_OK) {
                return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, new AppendMessageResult(encodeResult.status)));
            }
            putMessageLock.lock();
            msgIdBuilder.setLength(0);
            long elapsedTimeInLock;
            long queueOffset;
            int msgNum = 0;
            try {
                beginTimeInDledgerLock = this.defaultMessageStore.getSystemClock().now();
                queueOffset = getQueueOffsetByKey(encodeResult.queueOffsetKey, tranType);
                encodeResult.setQueueOffsetKey(queueOffset, true);
                // Build the batch append request
                BatchAppendEntryRequest request = new BatchAppendEntryRequest();
                request.setGroup(dLedgerConfig.getGroup()); // set the group
                request.setRemoteId(dLedgerServer.getMemberState().getSelfId());
                // Take the serialized message data from the EncodeResult
                request.setBatchMsgs(encodeResult.batchData);
                // Call handleAppend to write the data
                AppendFuture<AppendEntryResponse> appendFuture = (AppendFuture<AppendEntryResponse>) dLedgerServer.handleAppend(request);
                if (appendFuture.getPos() == -1) {
                    log.warn("HandleAppend return false due to error code {}", appendFuture.get().getCode());
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.OS_PAGECACHE_BUSY, new AppendMessageResult(AppendMessageStatus.UNKNOWN_ERROR)));
                }
                // ...
            } catch (Exception e) {
                log.error("Put message error", e);
                return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, new AppendMessageResult(AppendMessageStatus.UNKNOWN_ERROR)));
            } finally {
                beginTimeInDledgerLock = 0;
                putMessageLock.unlock();
            }
            // ...
        }
    }
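
The EncodeResult used above is a small holder object whose definition is not listed in this article. Reconstructed roughly from how it is used here and from the constructor call at the end of serialize below (the field names are inferred from usage, so treat this as a sketch rather than the exact source), it looks something like:

    // Sketch of EncodeResult, reconstructed from its usage in asyncPutMessages
    // and from the constructor call at the end of serialize
    class EncodeResult {
        AppendMessageStatus status; // serialization status, e.g. PUT_OK
        String queueOffsetKey;      // "topic-queueId" key used to look up the queue offset
        List<byte[]> batchData;     // one serialized byte array per message
        int totalMsgLen;            // total length of all serialized messages

        EncodeResult(AppendMessageStatus status, String queueOffsetKey, List<byte[]> batchData, int totalMsgLen) {
            this.status = status;
            this.queueOffsetKey = queueOffsetKey;
            this.batchData = batchData;
            this.totalMsgLen = totalMsgLen;
        }
        // setQueueOffsetKey(queueOffset, true), called above, additionally patches
        // the resolved queue offset into the serialized messages; omitted here
    }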

Serialization

The serialize method mainly serializes the message data into an in-memory buffer. Since there may be multiple messages, it loops over the buffer and serializes each one:

  1. Read the total size, the magic code, and the CRC checksum; these three reads exist to advance the buffer's read pointer;
  2. Read the FLAG and record it in the flag variable;
  3. Read the body length and record it in the bodyLen variable;
  4. The next position is where the message body starts; record it in the bodyPos variable;
  5. Starting from bodyPos, read the message body and compute its CRC checksum;
  6. Move the buffer's read pointer from bodyPos forward by bodyLen, i.e. skip over the message body and continue with the next field;
  7. Read the properties length and record the position where the properties start;
  8. Get the topic data and compute its length;
  9. Calculate the total message length and allocate memory accordingly;
  10. Check whether the message length exceeds the limit;
  11. Initialize the memory buffer and write the message fields into it one by one;
  12. Return the serialization result EncodeResult.

    class MessageSerializer {
        public EncodeResult serialize(final MessageExtBatch messageExtBatch) {
            // Build the key: topic + queueId
            String key = messageExtBatch.getTopic() + "-" + messageExtBatch.getQueueId();
            int totalMsgLen = 0;
            // Get the message data
            ByteBuffer messagesByteBuff = messageExtBatch.wrap();
            List<byte[]> batchBody = new LinkedList<>();
            // Get the system flag
            int sysFlag = messageExtBatch.getSysFlag();
            int bornHostLength = (sysFlag & MessageSysFlag.BORNHOST_V6_FLAG) == 0 ? 4 + 4 : 16 + 4;
            int storeHostLength = (sysFlag & MessageSysFlag.STOREHOSTADDRESS_V6_FLAG) == 0 ? 4 + 4 : 16 + 4;
            // Allocate memory
            ByteBuffer bornHostHolder = ByteBuffer.allocate(bornHostLength);
            ByteBuffer storeHostHolder = ByteBuffer.allocate(storeHostLength);
            // While there is still unread data
            while (messagesByteBuff.hasRemaining()) {
                // Read the total size
                messagesByteBuff.getInt();
                // Read the magic code
                messagesByteBuff.getInt();
                // Read the CRC checksum
                messagesByteBuff.getInt();
                // Read the FLAG
                int flag = messagesByteBuff.getInt();
                // Read the body length
                int bodyLen = messagesByteBuff.getInt();
                // Record the position where the message body starts
                int bodyPos = messagesByteBuff.position();
                // Read the message body starting at bodyPos and compute its CRC checksum
                int bodyCrc = UtilAll.crc32(messagesByteBuff.array(), bodyPos, bodyLen);
                // Move the read pointer from bodyPos forward by bodyLen, skipping the body to read the next field
                messagesByteBuff.position(bodyPos + bodyLen);
                // Read the properties length
                short propertiesLen = messagesByteBuff.getShort();
                // Record the position where the properties start
                int propertiesPos = messagesByteBuff.position();
                // Move the read pointer to skip the properties
                messagesByteBuff.position(propertiesPos + propertiesLen);
                // Get the topic data
                final byte[] topicData = messageExtBatch.getTopic().getBytes(MessageDecoder.CHARSET_UTF8);
                // Length of the topic byte array
                final int topicLength = topicData.length;
                // Calculate the message length
                final int msgLen = calMsgLength(messageExtBatch.getSysFlag(), bodyLen, topicLength, propertiesLen);
                // Allocate memory according to the message length
                ByteBuffer msgStoreItemMemory = ByteBuffer.allocate(msgLen);
                // If the maximum message size is exceeded
                if (msgLen > this.maxMessageSize) {
                    CommitLog.log.warn("message size exceeded, msg total size: " + msgLen + ", msg body size: " + bodyLen
                        + ", maxMessageSize: " + this.maxMessageSize);
                    throw new RuntimeException("message size exceeded");
                }
                // Update the total length
                totalMsgLen += msgLen;
                // If the maximum message size is exceeded
                if (totalMsgLen > maxMessageSize) {
                    throw new RuntimeException("message size exceeded");
                }
                // Initialize the buffer
                this.resetByteBuffer(msgStoreItemMemory, msgLen);
                // 1 Write the total length
                msgStoreItemMemory.putInt(msgLen);
                // 2 Write the magic code
                msgStoreItemMemory.putInt(DLedgerCommitLog.MESSAGE_MAGIC_CODE);
                // 3 Write the CRC checksum
                msgStoreItemMemory.putInt(bodyCrc);
                // 4 Write the QUEUEID
                msgStoreItemMemory.putInt(messageExtBatch.getQueueId());
                // 5 Write the FLAG
                msgStoreItemMemory.putInt(flag);
                // 6 Write the queue offset QUEUEOFFSET
                msgStoreItemMemory.putLong(0L);
                // 7 Write the physical offset
                msgStoreItemMemory.putLong(0);
                // 8 Write the system flag SYSFLAG
                msgStoreItemMemory.putInt(messageExtBatch.getSysFlag());
                // 9 Write the born timestamp
                msgStoreItemMemory.putLong(messageExtBatch.getBornTimestamp());
                // 10 BORNHOST
                resetByteBuffer(bornHostHolder, bornHostLength);
                msgStoreItemMemory.put(messageExtBatch.getBornHostBytes(bornHostHolder));
                // 11 Write the store timestamp
                msgStoreItemMemory.putLong(messageExtBatch.getStoreTimestamp());
                // 12 STOREHOSTADDRESS
                resetByteBuffer(storeHostHolder, storeHostLength);
                msgStoreItemMemory.put(messageExtBatch.getStoreHostBytes(storeHostHolder));
                // 13 RECONSUMETIMES
                msgStoreItemMemory.putInt(messageExtBatch.getReconsumeTimes());
                // 14 Prepared Transaction Offset
                msgStoreItemMemory.putLong(0);
                // 15 Write the body length
                msgStoreItemMemory.putInt(bodyLen);
                if (bodyLen > 0) {
                    // Write the message body
                    msgStoreItemMemory.put(messagesByteBuff.array(), bodyPos, bodyLen);
                }
                // 16 Write the topic
                msgStoreItemMemory.put((byte) topicLength);
                msgStoreItemMemory.put(topicData);
                // 17 Write the properties length
                msgStoreItemMemory.putShort(propertiesLen);
                if (propertiesLen > 0) {
                    msgStoreItemMemory.put(messagesByteBuff.array(), propertiesPos, propertiesLen);
                }
                // Copy into a byte array
                byte[] data = new byte[msgLen];
                msgStoreItemMemory.clear();
                msgStoreItemMemory.get(data);
                // Add it to the batch
                batchBody.add(data);
            }
            // Return the result
            return new EncodeResult(AppendMessageStatus.PUT_OK, key, batchBody, totalMsgLen);
        }
    }
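
Step 9 above calls calMsgLength to size the buffer; the method itself is not listed in this article. As a rough sketch (field widths inferred from the 17 writes above, so treat the details as assumptions rather than the exact source), it simply sums the fixed-width fields plus the variable-length body, topic, and properties:

    // Sketch only: sums the fixed-width fields written above plus the
    // variable-length body, topic and properties (widths inferred from the writes)
    private static int calMsgLength(int sysFlag, int bodyLength, int topicLength, int propertiesLength) {
        int bornHostLength = (sysFlag & MessageSysFlag.BORNHOST_V6_FLAG) == 0 ? 8 : 20;
        int storeHostLength = (sysFlag & MessageSysFlag.STOREHOSTADDRESS_V6_FLAG) == 0 ? 8 : 20;
        return 4                    // 1 TOTALSIZE
            + 4                     // 2 MAGICCODE
            + 4                     // 3 BODYCRC
            + 4                     // 4 QUEUEID
            + 4                     // 5 FLAG
            + 8                     // 6 QUEUEOFFSET
            + 8                     // 7 PHYSICALOFFSET
            + 4                     // 8 SYSFLAG
            + 8                     // 9 BORNTIMESTAMP
            + bornHostLength        // 10 BORNHOST
            + 8                     // 11 STORETIMESTAMP
            + storeHostLength       // 12 STOREHOSTADDRESS
            + 4                     // 13 RECONSUMETIMES
            + 8                     // 14 Prepared Transaction Offset
            + 4 + bodyLength        // 15 BODY
            + 1 + topicLength       // 16 TOPIC
            + 2 + propertiesLength; // 17 PROPERTIES
    }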

Writing the Message

After the message data has been serialized, an append request is built and handleAppend is called to write the messages. Its processing logic is:

  1. Get the current Term and check whether the number of pending write requests for this Term exceeds the maximum. If not, continue to the next step; if it does, set the response code to LEADER_PENDING_FULL, meaning too many append requests are being processed, and reject the current request;
  2. Check whether the request is a batch request:
    • If it is: iterate over each message, create a DLedgerEntry for it, call appendAsLeader to write it to the Leader node, and call waitAck to create an async response object for the last message;
    • If it is not: create a DLedgerEntry for the message directly, call appendAsLeader to write it to the Leader node, and call waitAck to create the async response object;

    public class DLedgerServer implements DLedgerProtocolHander {
        @Override
        public CompletableFuture<AppendEntryResponse> handleAppend(AppendEntryRequest request) throws IOException {
            try {
                PreConditions.check(memberState.getSelfId().equals(request.getRemoteId()), DLedgerResponseCode.UNKNOWN_MEMBER, "%s != %s", request.getRemoteId(), memberState.getSelfId());
                PreConditions.check(memberState.getGroup().equals(request.getGroup()), DLedgerResponseCode.UNKNOWN_GROUP, "%s != %s", request.getGroup(), memberState.getGroup());
                // Check whether this node is the Leader; if not, throw a NOT_LEADER exception
                PreConditions.check(memberState.isLeader(), DLedgerResponseCode.NOT_LEADER);
                PreConditions.check(memberState.getTransferee() == null, DLedgerResponseCode.LEADER_TRANSFERRING);
                // Get the current Term
                long currTerm = memberState.currTerm();
                // Check the number of pending requests
                if (dLedgerEntryPusher.isPendingFull(currTerm)) {
                    AppendEntryResponse appendEntryResponse = new AppendEntryResponse();
                    appendEntryResponse.setGroup(memberState.getGroup());
                    // Set the response code to LEADER_PENDING_FULL
                    appendEntryResponse.setCode(DLedgerResponseCode.LEADER_PENDING_FULL.getCode());
                    // Set the Term
                    appendEntryResponse.setTerm(currTerm);
                    appendEntryResponse.setLeaderId(memberState.getSelfId()); // set the leader ID
                    return AppendFuture.newCompletedFuture(-1, appendEntryResponse);
                } else {
                    if (request instanceof BatchAppendEntryRequest) { // batch
                        BatchAppendEntryRequest batchRequest = (BatchAppendEntryRequest) request;
                        if (batchRequest.getBatchMsgs() != null && batchRequest.getBatchMsgs().size() != 0) {
                            long[] positions = new long[batchRequest.getBatchMsgs().size()];
                            DLedgerEntry resEntry = null;
                            int index = 0;
                            // Iterate over each message
                            Iterator<byte[]> iterator = batchRequest.getBatchMsgs().iterator();
                            while (iterator.hasNext()) {
                                // Create a DLedgerEntry
                                DLedgerEntry dLedgerEntry = new DLedgerEntry();
                                // Set the message body
                                dLedgerEntry.setBody(iterator.next());
                                // Write the message
                                resEntry = dLedgerStore.appendAsLeader(dLedgerEntry);
                                positions[index++] = resEntry.getPos();
                            }
                            // Create the async response object for the last dLedgerEntry
                            BatchAppendFuture<AppendEntryResponse> batchAppendFuture =
                                (BatchAppendFuture<AppendEntryResponse>) dLedgerEntryPusher.waitAck(resEntry, true);
                            batchAppendFuture.setPositions(positions);
                            return batchAppendFuture;
                        }
                        throw new DLedgerException(DLedgerResponseCode.REQUEST_WITH_EMPTY_BODYS, "BatchAppendEntryRequest" +
                            " with empty bodys");
                    } else { // single message
                        DLedgerEntry dLedgerEntry = new DLedgerEntry();
                        // Set the message body
                        dLedgerEntry.setBody(request.getBody());
                        // Write the message
                        DLedgerEntry resEntry = dLedgerStore.appendAsLeader(dLedgerEntry);
                        // Wait for the response; create the async response object
                        return dLedgerEntryPusher.waitAck(resEntry, false);
                    }
                }
            } catch (DLedgerException e) {
                // ...
            }
        }
    }

pendingAppendResponsesByTerm
DLedgerEntryPusher has a member variable pendingAppendResponsesByTerm. Its key is the Term value; its value is a ConcurrentMap whose key is the message index (each message's sequence number, starting from 0, covered later) and whose value is the async response object AppendEntryResponse for that message's write request.

When isPendingFull is called, it first checks whether the current Term has an entry in pendingAppendResponsesByTerm, creating and inserting a ConcurrentHashMap if not. Otherwise it takes the number of entries in the map for that Term and compares it against MaxPendingRequestsNum to see whether the maximum has been exceeded:


    public class DLedgerEntryPusher {
        // The outer key is the Term value; the value is a ConcurrentMap whose key is the message
        // index and whose value is the async response object AppendEntryResponse for that write request
        private Map<Long, ConcurrentMap<Long, TimeoutFuture<AppendEntryResponse>>> pendingAppendResponsesByTerm = new ConcurrentHashMap<>();

        public boolean isPendingFull(long currTerm) {
            // Make sure currTerm is present in pendingAppendResponsesByTerm
            checkTermForPendingMap(currTerm, "isPendingFull");
            // Check whether the number of pending write requests for the current Term exceeds the maximum
            return pendingAppendResponsesByTerm.get(currTerm).size() > dLedgerConfig.getMaxPendingRequestsNum();
        }

        private void checkTermForPendingMap(long term, String env) {
            // If pendingAppendResponsesByTerm does not contain the term
            if (!pendingAppendResponsesByTerm.containsKey(term)) {
                logger.info("Initialize the pending append map in {} for term={}", env, term);
                // Create a ConcurrentHashMap and add it to pendingAppendResponsesByTerm
                pendingAppendResponsesByTerm.putIfAbsent(term, new ConcurrentHashMap<>());
            }
        }
    }

So when are entries added to pendingAppendResponsesByTerm?
After a message is written to the Leader node, when DLedgerEntryPusher's waitAck method is called (covered later), an AppendFuture<AppendEntryResponse> response object is created for the request and put into pendingAppendResponsesByTerm if the cluster has more than one node. The number of response objects stored there therefore indicates how many write requests are waiting in the current Term:


    // Create the response object
    AppendFuture<AppendEntryResponse> future;
    // Create the AppendFuture
    if (isBatchWait) {
        // batch
        future = new BatchAppendFuture<>(dLedgerConfig.getMaxWaitAckTimeMs());
    } else {
        future = new AppendFuture<>(dLedgerConfig.getMaxWaitAckTimeMs());
    }
    future.setPos(entry.getPos());
    // Put the created AppendFuture into pendingAppendResponsesByTerm
    CompletableFuture<AppendEntryResponse> old = pendingAppendResponsesByTerm.get(entry.getTerm()).put(entry.getIndex(), future);

Writing to the Leader

DLedgerStore has two implementations: DLedgerMemoryStore (in-memory storage) and DLedgerMmapFileStore (Mmap file mapping):

As the createDLedgerStore method shows, the implementation is chosen according to the configured storage type:


    public class DLedgerServer implements DLedgerProtocolHander {
        public DLedgerServer(DLedgerConfig dLedgerConfig) {
            this.dLedgerConfig = dLedgerConfig;
            this.memberState = new MemberState(dLedgerConfig);
            // Create the DLedgerStore according to the StoreType in the config
            this.dLedgerStore = createDLedgerStore(dLedgerConfig.getStoreType(), this.dLedgerConfig, this.memberState);
            // ...
        }

        // Create the DLedgerStore
        private DLedgerStore createDLedgerStore(String storeType, DLedgerConfig config, MemberState memberState) {
            if (storeType.equals(DLedgerConfig.MEMORY)) {
                return new DLedgerMemoryStore(config, memberState);
            } else {
                return new DLedgerMmapFileStore(config, memberState);
            }
        }
    }

appendAsLeader

Next, taking DLedgerMmapFileStore as the example, let's look at the processing logic of appendAsLeader:

  1. Perform the Leader check and the disk-full check;
  2. Get the log data buffer (dataBuffer) and the index data buffer (indexBuffer); content is first written into the buffers and the buffers are then written to the files;
  3. Write the entry's content into dataBuffer;
  4. Set the message's index (each message is numbered) to ledgerEndIndex + 1. ledgerEndIndex starts at -1 and grows by 1 with each new message; it is updated after a successful write and always records the index of the last successfully written message;
  5. Call dataFileList's append method to write the dataBuffer content to the log file, which returns the data's offset within the file;
  6. Write the index information into indexBuffer;
  7. Call indexFileList's append method to write the indexBuffer content to the index file;
  8. Increment ledgerEndIndex by 1;
  9. Set ledgerEndTerm to the current Term;
  10. Call updateLedgerEndIndexAndTerm to update the LedgerEndIndex and LedgerEndTerm recorded in MemberState; LedgerEndIndex is persisted to file when a FLUSH happens.


    public class DLedgerMmapFileStore extends DLedgerStore {
        // Log data buffer
        private ThreadLocal<ByteBuffer> localEntryBuffer;
        // Index data buffer
        private ThreadLocal<ByteBuffer> localIndexBuffer;

        @Override
        public DLedgerEntry appendAsLeader(DLedgerEntry entry) {
            // Check whether the current node is the Leader
            PreConditions.check(memberState.isLeader(), DLedgerResponseCode.NOT_LEADER);
            // Check whether the disk is full
            PreConditions.check(!isDiskFull, DLedgerResponseCode.DISK_FULL);
            // Get the log data buffer
            ByteBuffer dataBuffer = localEntryBuffer.get();
            // Get the index data buffer
            ByteBuffer indexBuffer = localIndexBuffer.get();
            // Write the entry content into dataBuffer
            DLedgerEntryCoder.encode(entry, dataBuffer);
            int entrySize = dataBuffer.remaining();
            synchronized (memberState) {
                PreConditions.check(memberState.isLeader(), DLedgerResponseCode.NOT_LEADER, null);
                PreConditions.check(memberState.getTransferee() == null, DLedgerResponseCode.LEADER_TRANSFERRING, null);
                // The message's index is ledgerEndIndex + 1
                long nextIndex = ledgerEndIndex + 1;
                // Set the message's index
                entry.setIndex(nextIndex);
                // Set the Term
                entry.setTerm(memberState.currTerm());
                // Set the magic code
                entry.setMagic(CURRENT_MAGIC);
                // Set the index, term and magic code in the buffer
                DLedgerEntryCoder.setIndexTerm(dataBuffer, nextIndex, memberState.currTerm(), CURRENT_MAGIC);
                long prePos = dataFileList.preAppend(dataBuffer.remaining());
                entry.setPos(prePos);
                PreConditions.check(prePos != -1, DLedgerResponseCode.DISK_ERROR, null);
                DLedgerEntryCoder.setPos(dataBuffer, prePos);
                for (AppendHook writeHook : appendHooks) {
                    writeHook.doHook(entry, dataBuffer.slice(), DLedgerEntry.BODY_OFFSET);
                }
                // Write the dataBuffer content to the log file; the data's position is returned
                long dataPos = dataFileList.append(dataBuffer.array(), 0, dataBuffer.remaining());
                PreConditions.check(dataPos != -1, DLedgerResponseCode.DISK_ERROR, null);
                PreConditions.check(dataPos == prePos, DLedgerResponseCode.DISK_ERROR, null);
                // Write the index information into indexBuffer
                DLedgerEntryCoder.encodeIndex(dataPos, entrySize, CURRENT_MAGIC, nextIndex, memberState.currTerm(), indexBuffer);
                // Write the indexBuffer content to the index file
                long indexPos = indexFileList.append(indexBuffer.array(), 0, indexBuffer.remaining(), false);
                PreConditions.check(indexPos == entry.getIndex() * INDEX_UNIT_SIZE, DLedgerResponseCode.DISK_ERROR, null);
                if (logger.isDebugEnabled()) {
                    logger.info("[{}] Append as Leader {} {}", memberState.getSelfId(), entry.getIndex(), entry.getBody().length);
                }
                // Increment ledgerEndIndex
                ledgerEndIndex++;
                // Set ledgerEndTerm to the current Term
                ledgerEndTerm = memberState.currTerm();
                if (ledgerBeginIndex == -1) {
                    // Update ledgerBeginIndex
                    ledgerBeginIndex = ledgerEndIndex;
                }
                // Update LedgerEndIndex and LedgerEndTerm
                updateLedgerEndIndexAndTerm();
                return entry;
            }
        }
    }

Updating LedgerEndIndex and LedgerEndTerm

After a message is written to the Leader, the getLedgerEndIndex and getLedgerEndTerm methods are called to fetch the LedgerEndIndex and LedgerEndTerm values recorded in DLedgerMmapFileStore, which are then propagated into MemberState:


    public abstract class DLedgerStore {
        protected void updateLedgerEndIndexAndTerm() {
            if (getMemberState() != null) {
                // Call MemberState's updateLedgerIndexAndTerm to update the values
                getMemberState().updateLedgerIndexAndTerm(getLedgerEndIndex(), getLedgerEndTerm());
            }
        }
    }

    public class MemberState {
        private volatile long ledgerEndIndex = -1;
        private volatile long ledgerEndTerm = -1;

        // Update ledgerEndIndex and ledgerEndTerm
        public void updateLedgerIndexAndTerm(long index, long term) {
            this.ledgerEndIndex = index;
            this.ledgerEndTerm = term;
        }
    }

waitAck

After a message is written to the Leader node, since the Leader has to forward the log to the Follower nodes and that process is asynchronous, the waitAck method creates an async response object for the write. Its main logic is:

  1. Call updatePeerWaterMark to update the water mark. Because the Leader forwards the log to every Follower, the water mark records each node's replication progress, i.e. the index of the message it has replicated up to. Here the index of the Leader's latest written message is recorded; the Follower-side update appears later;
  2. If the cluster has only one node, create an AppendEntryResponse and return a completed response;
  3. If the cluster has multiple nodes, since log forwarding is asynchronous, create the async response object AppendFuture<AppendEntryResponse> and put it into pendingAppendResponsesByTerm; this is where pendingAppendResponsesByTerm gets its entries.

Once more, to distinguish pendingAppendResponsesByTerm from peerWaterMarksByTerm:
pendingAppendResponsesByTerm records the async response object AppendEntryResponse for each message write request; since a request has to wait for responses from a majority of the cluster, it is handled asynchronously and the result is collected afterwards.
peerWaterMarksByTerm records each node's replication progress, i.e. the index of the last message successfully written on that node.


    public class DLedgerEntryPusher {
        public CompletableFuture<AppendEntryResponse> waitAck(DLedgerEntry entry, boolean isBatchWait) {
            // Update the index of the latest message written on the current node
            updatePeerWaterMark(entry.getTerm(), memberState.getSelfId(), entry.getIndex());
            // If the cluster has only one node
            if (memberState.getPeerMap().size() == 1) {
                // Create the response
                AppendEntryResponse response = new AppendEntryResponse();
                response.setGroup(memberState.getGroup());
                response.setLeaderId(memberState.getSelfId());
                response.setIndex(entry.getIndex());
                response.setTerm(entry.getTerm());
                response.setPos(entry.getPos());
                if (isBatchWait) {
                    return BatchAppendFuture.newCompletedFuture(entry.getPos(), response);
                }
                return AppendFuture.newCompletedFuture(entry.getPos(), response);
            } else {
                // Make sure the Term is present in pendingAppendResponsesByTerm
                checkTermForPendingMap(entry.getTerm(), "waitAck");
                // The response object
                AppendFuture<AppendEntryResponse> future;
                // Create the AppendFuture
                if (isBatchWait) {
                    // batch
                    future = new BatchAppendFuture<>(dLedgerConfig.getMaxWaitAckTimeMs());
                } else {
                    future = new AppendFuture<>(dLedgerConfig.getMaxWaitAckTimeMs());
                }
                future.setPos(entry.getPos());
                // Put the created AppendFuture into pendingAppendResponsesByTerm
                CompletableFuture<AppendEntryResponse> old = pendingAppendResponsesByTerm.get(entry.getTerm()).put(entry.getIndex(), future);
                if (old != null) {
                    logger.warn("[MONITOR] get old wait at index={}", entry.getIndex());
                }
                return future;
            }
        }
    }

Log Replication

After a message is written to the Leader, the Leader forwards it to the other Follower nodes. This is done asynchronously; let's now look at the replication process.

The startup method of DLedgerEntryPusher starts the following threads:

  1. EntryDispatcher: used by the Leader node to forward the log to the Followers;
  2. EntryHandler: used by Follower nodes to handle the log sent by the Leader;
  3. QuorumAckChecker: used by the Leader node to wait for the Followers to synchronize;

Note that the Leader creates one EntryDispatcher per Follower node; each EntryDispatcher is responsible for forwarding the log to one node, and the nodes are processed in parallel.


    public class DLedgerEntryPusher {
        public DLedgerEntryPusher(DLedgerConfig dLedgerConfig, MemberState memberState, DLedgerStore dLedgerStore,
            DLedgerRpcService dLedgerRpcService) {
            this.dLedgerConfig = dLedgerConfig;
            this.memberState = memberState;
            this.dLedgerStore = dLedgerStore;
            this.dLedgerRpcService = dLedgerRpcService;
            for (String peer : memberState.getPeerMap().keySet()) {
                if (!peer.equals(memberState.getSelfId())) {
                    // Create an EntryDispatcher for every node in the cluster except the current one
                    dispatcherMap.put(peer, new EntryDispatcher(peer, logger));
                }
            }
            // Create the EntryHandler
            this.entryHandler = new EntryHandler(logger);
            // Create the QuorumAckChecker
            this.quorumAckChecker = new QuorumAckChecker(logger);
        }

        public void startup() {
            // Start the EntryHandler
            entryHandler.start();
            // Start the QuorumAckChecker
            quorumAckChecker.start();
            // Start the EntryDispatchers
            for (EntryDispatcher dispatcher : dispatcherMap.values()) {
                dispatcher.start();
            }
        }
    }

EntryDispatcher (Log Forwarding)

EntryDispatcher is used by the Leader to forward the log to the Followers. It extends ShutdownAbleThread, so a dedicated thread is started to handle the forwarding; the entry point is the doWork method (a sketch of this thread pattern follows the code below).

In doWork, checkAndFreshState is called first to check the node's state. This mainly verifies that the current node is still the Leader and adjusts the push type. If the node is not the Leader, processing stops; if it is, the push type is examined:

  • APPEND: message append, used to forward messages to the Follower; batch messages go through doBatchAppend, otherwise doAppend is called;
  • COMPARE: message comparison, which usually happens when the data is inconsistent; doCompare is called to compare entries;

    public class DLedgerEntryPusher {
        // Log forwarding thread
        private class EntryDispatcher extends ShutdownAbleThread {
            @Override
            public void doWork() {
                try {
                    // Check the state
                    if (!checkAndFreshState()) {
                        waitForRunning(1);
                        return;
                    }
                    // If the push type is APPEND
                    if (type.get() == PushEntryRequest.Type.APPEND) {
                        // If batch push is enabled
                        if (dLedgerConfig.isEnableBatchPush()) {
                            doBatchAppend();
                        } else {
                            doAppend();
                        }
                    } else {
                        // Compare
                        doCompare();
                    }
                    Thread.yield();
                } catch (Throwable t) {
                    DLedgerEntryPusher.logger.error("[Push-{}]Error in {} writeIndex={} compareIndex={}", peerId, getName(), writeIndex, compareIndex, t);
                    // Switch to COMPARE on any exception
                    changeState(-1, PushEntryRequest.Type.COMPARE);
                    DLedgerUtils.sleep(500);
                }
            }
        }
    }
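
For reference, ShutdownAbleThread is a small service-thread base class. Conceptually (a simplified sketch, not the exact DLedger source, which additionally coordinates waitForRunning/wakeup between threads), its run loop just keeps invoking doWork until the thread is asked to shut down:

    // Simplified sketch of the ShutdownAbleThread pattern shared by
    // EntryDispatcher, EntryHandler and QuorumAckChecker
    public abstract class ShutdownAbleThread extends Thread {
        private volatile boolean running = true;

        public abstract void doWork();

        @Override
        public void run() {
            while (running) {
                doWork(); // each subclass performs one unit of work per iteration
            }
        }

        public void shutdown() {
            running = false; // the loop exits after the current iteration
        }
    }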

State Check (checkAndFreshState)

If the Term differs from the one recorded in memberState, or the LeaderId is null, or the LeaderId differs from memberState's, changeState is called to switch the push type to COMPARE and reset compareIndex to -1:


    public class DLedgerEntryPusher {
        private class EntryDispatcher extends ShutdownAbleThread {
            private long term = -1;
            private String leaderId = null;

            private boolean checkAndFreshState() {
                // If the current node is not the Leader
                if (!memberState.isLeader()) {
                    return false;
                }
                // If the Term differs from memberState's, or the LeaderId is null, or it differs from memberState's LeaderId
                if (term != memberState.currTerm() || leaderId == null || !leaderId.equals(memberState.getLeaderId())) {
                    synchronized (memberState) { // lock
                        if (!memberState.isLeader()) {
                            return false;
                        }
                        PreConditions.check(memberState.getSelfId().equals(memberState.getLeaderId()), DLedgerResponseCode.UNKNOWN);
                        term = memberState.currTerm();
                        leaderId = memberState.getSelfId();
                        // Switch the state to COMPARE
                        changeState(-1, PushEntryRequest.Type.COMPARE);
                    }
                }
                return true;
            }

            private synchronized void changeState(long index, PushEntryRequest.Type target) {
                logger.info("[Push-{}]Change state from {} to {} at {}", peerId, type.get(), target, index);
                switch (target) {
                    case APPEND:
                        compareIndex = -1;
                        updatePeerWaterMark(term, peerId, index);
                        quorumAckChecker.wakeup();
                        writeIndex = index + 1;
                        if (dLedgerConfig.isEnableBatchPush()) {
                            resetBatchAppendEntryRequest();
                        }
                        break;
                    case COMPARE:
                        // If the CAS to the COMPARE state succeeds
                        if (this.type.compareAndSet(PushEntryRequest.Type.APPEND, PushEntryRequest.Type.COMPARE)) {
                            compareIndex = -1; // reset compareIndex to -1
                            if (dLedgerConfig.isEnableBatchPush()) {
                                batchPendingMap.clear();
                            } else {
                                pendingMap.clear();
                            }
                        }
                        break;
                    case TRUNCATE:
                        compareIndex = -1;
                        break;
                    default:
                        break;
                }
                type.set(target);
            }
        }
    }

Leader-Side Message Forwarding

In the APPEND state, the Leader sends Append requests to a Follower node to forward messages to it. The processing logic of doAppend is:

  1. Call checkAndFreshState to check the state;
  2. If the push type is not APPEND, stop processing;
  3. writeIndex is the index of the next message to forward (default -1). If it is greater than LedgerEndIndex, call doCommit to send the Follower a COMMIT request that updates its committedIndex (covered later; a sketch of doCommit follows the code below);
    As this shows, forwarding also uses a counter, writeIndex, to record the index of the next message to forward: each round the message at writeIndex is taken from the log and forwarded, and after a successful forward writeIndex is incremented to point at the next entry.
  4. If the size of pendingMap exceeds the limit maxPendingSize, or the last check was more than 1000 ms ago (so it has not been cleaned for quite a while), clean up expired data (this step exists purely for cleanup):
    pendingMap is a ConcurrentMap whose key is the message index and whose value is the time that message was forwarded to the Follower node (doAppendInner puts the entries into pendingMap);
    • As seen earlier, peerWaterMark records each node's replication progress; here the progress for this Term and peerId (the index of the latest successfully replicated message) is fetched into the local peerWaterMark variable;
    • Iterate over pendingMap and compare each index against peerWaterMark: everything up to peerWaterMark has already been written successfully, so an index smaller than peerWaterMark is expired and can be cleaned; removing it from pendingMap frees the space;
    • Update the check time lastCheckLeakTimeMs to the current time;
  5. Call doAppendInner to forward the message;
  6. Increment writeIndex so that it points at the next message to forward;


    public class DLedgerEntryPusher {
        private class EntryDispatcher extends ShutdownAbleThread {
            // Index of the next message to forward, default -1
            private long writeIndex = -1;
            // Key: message index; value: the time that message was forwarded to the Follower
            private ConcurrentMap<Long, Long> pendingMap = new ConcurrentHashMap<>();

            private void doAppend() throws Exception {
                while (true) {
                    // Check the state
                    if (!checkAndFreshState()) {
                        break;
                    }
                    // If not in the APPEND state, stop
                    if (type.get() != PushEntryRequest.Type.APPEND) {
                        break;
                    }
                    // Check whether the next index to forward is greater than LedgerEndIndex
                    if (writeIndex > dLedgerStore.getLedgerEndIndex()) {
                        doCommit(); // send a COMMIT request to the Follower to update committedIndex
                        doCheckAppendResponse();
                        break;
                    }
                    // If pendingMap's size exceeds maxPendingSize, or the last check was more than 1000 ms ago
                    if (pendingMap.size() >= maxPendingSize || (DLedgerUtils.elapsed(lastCheckLeakTimeMs) > 1000)) {
                        // Fetch the replication progress for this peerId
                        long peerWaterMark = getPeerWaterMark(term, peerId);
                        // Iterate over pendingMap
                        for (Long index : pendingMap.keySet()) {
                            // If the index is smaller than peerWaterMark
                            if (index < peerWaterMark) {
                                // Remove it
                                pendingMap.remove(index);
                            }
                        }
                        // Update the check time
                        lastCheckLeakTimeMs = System.currentTimeMillis();
                    }
                    if (pendingMap.size() >= maxPendingSize) {
                        doCheckAppendResponse();
                        break;
                    }
                    // Forward the message
                    doAppendInner(writeIndex);
                    // Increment writeIndex
                    writeIndex++;
                }
            }
        }
    }
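
doCommit itself is not listed in this article. Roughly (a sketch based on the surrounding logic; treat the details as assumptions), it rate-limits itself and pushes a COMMIT request that carries no entry, only the Leader's committedIndex set by buildPushRequest:

    // Sketch of doCommit: periodically push a COMMIT request carrying the
    // Leader's committedIndex so the Follower can advance its own commit point
    private void doCommit() throws Exception {
        // Send a COMMIT at most roughly once per second
        if (DLedgerUtils.elapsed(lastPushCommitTimeMs) > 1000) {
            // No entry is attached; buildPushRequest still sets the commitIndex
            PushEntryRequest request = buildPushRequest(null, PushEntryRequest.Type.COMMIT);
            // The response is ignored; COMMIT is best-effort
            dLedgerRpcService.push(request);
            lastPushCommitTimeMs = System.currentTimeMillis();
        }
    }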
getPeerWaterMark

peerWaterMarksByTerm

peerWaterMarksByTerm records log forwarding progress. Its key is the Term; its value is a ConcurrentMap whose key is the Follower node's ID (peerId) and whose value is the index of the latest message that node has finished synchronizing:

When getPeerWaterMark is called, it first calls checkTermForWaterMark to check whether peerWaterMarksByTerm has data for the term. If not, it creates a ConcurrentMap and iterates over the cluster's nodes, adding each one with the node ID as the key and -1 as the default value. After a message is successfully written to a Follower node, updatePeerWaterMark is called to update the synchronization progress:


    public class DLedgerEntryPusher {
        // Records Follower synchronization progress. Key: Term; value: ConcurrentMap whose key is the
        // Follower node ID (peerId) and whose value is the index of the latest message that node has synchronized
        private Map<Long, ConcurrentMap<String, Long>> peerWaterMarksByTerm = new ConcurrentHashMap<>();

        // Get a node's synchronization progress
        public long getPeerWaterMark(long term, String peerId) {
            synchronized (peerWaterMarksByTerm) {
                checkTermForWaterMark(term, "getPeerWaterMark");
                return peerWaterMarksByTerm.get(term).get(peerId);
            }
        }

        private void checkTermForWaterMark(long term, String env) {
            // If peerWaterMarksByTerm has no entry for the term
            if (!peerWaterMarksByTerm.containsKey(term)) {
                logger.info("Initialize the watermark in {} for term={}", env, term);
                // Create a ConcurrentMap
                ConcurrentMap<String, Long> waterMarks = new ConcurrentHashMap<>();
                // Iterate over the nodes in the cluster
                for (String peer : memberState.getPeerMap().keySet()) {
                    // Initialize: key is the peer ID, value is -1
                    waterMarks.put(peer, -1L);
                }
                // Add it to peerWaterMarksByTerm
                peerWaterMarksByTerm.putIfAbsent(term, waterMarks);
            }
        }

        // Update the water mark
        private void updatePeerWaterMark(long term, String peerId, long index) {
            synchronized (peerWaterMarksByTerm) {
                // Check
                checkTermForWaterMark(term, "updatePeerWaterMark");
                // Update only if the previous water mark is smaller than the new index
                if (peerWaterMarksByTerm.get(term).get(peerId) < index) {
                    peerWaterMarksByTerm.get(term).put(peerId, index);
                }
            }
        }
    }
Forwarding a Message

The processing logic of doAppendInner is as follows:

  1. Fetch the message Entry from the log by its index;
  2. Call buildPushRequest to build the log forwarding request PushEntryRequest, which carries the entry, the current Term, the Leader's commitIndex (the index of the last message acknowledged by a majority of the cluster), and other fields;
  3. Call dLedgerRpcService's push method to send the request to the Follower node;
  4. Put this message's index into pendingMap with its send time (key: message index; value: current time);
  5. Wait for the Follower node's response:
    (1) If the response code is SUCCESS, the node wrote the entry successfully:
    • remove this message's index from pendingMap;
    • update this node's replication progress, i.e. the value kept via updatePeerWaterMark;
    • call quorumAckChecker's wakeup to wake the QuorumAckChecker thread;
    (2) If the response code is INCONSISTENT_STATE, the Follower's data is inconsistent, and changeState must be called to switch to COMPARE;

    private class EntryDispatcher extends ShutdownAbleThread {
        private void doAppendInner(long index) throws Exception {
            // Fetch the message Entry from the log by index
            DLedgerEntry entry = getDLedgerEntryForAppend(index);
            if (null == entry) {
                return;
            }
            checkQuotaAndWait(entry);
            // Build the log forwarding request PushEntryRequest
            PushEntryRequest request = buildPushRequest(entry, PushEntryRequest.Type.APPEND);
            // Send the forwarding request to the Follower node
            CompletableFuture<PushEntryResponse> responseFuture = dLedgerRpcService.push(request);
            // Put it into pendingMap; key is the message index, value is the current time
            pendingMap.put(index, System.currentTimeMillis());
            responseFuture.whenComplete((x, ex) -> {
                try {
                    // Handle the response
                    PreConditions.check(ex == null, DLedgerResponseCode.UNKNOWN);
                    DLedgerResponseCode responseCode = DLedgerResponseCode.valueOf(x.getCode());
                    switch (responseCode) {
                        case SUCCESS: // success
                            // Remove from pendingMap
                            pendingMap.remove(x.getIndex());
                            // Update the peer water mark
                            updatePeerWaterMark(x.getTerm(), peerId, x.getIndex());
                            // Wake up the QuorumAckChecker
                            quorumAckChecker.wakeup();
                            break;
                        case INCONSISTENT_STATE: // the Follower's data is inconsistent
                            logger.info("[Push-{}]Get INCONSISTENT_STATE when push index={} term={}", peerId, x.getIndex(), x.getTerm());
                            changeState(-1, PushEntryRequest.Type.COMPARE); // switch to COMPARE
                            break;
                        default:
                            logger.warn("[Push-{}]Get error response code {} {}", peerId, responseCode, x.baseInfo());
                            break;
                    }
                } catch (Throwable t) {
                    logger.error("", t);
                }
            });
            lastPushCommitTimeMs = System.currentTimeMillis();
        }

        private PushEntryRequest buildPushRequest(DLedgerEntry entry, PushEntryRequest.Type target) {
            PushEntryRequest request = new PushEntryRequest(); // create the PushEntryRequest
            request.setGroup(memberState.getGroup());
            request.setRemoteId(peerId);
            request.setLeaderId(leaderId);
            // Set the Term
            request.setTerm(term);
            // Set the entry
            request.setEntry(entry);
            request.setType(target);
            // Set commitIndex: the index of the last message acknowledged by a majority of the cluster
            request.setCommitIndex(dLedgerStore.getCommittedIndex());
            return request;
        }
    }

To make it easier to connect the Leader-side forwarding with the Follower-side processing, note that the Follower's handling of APPEND requests is described below in the EntryHandler section.

Leader-Side Message Comparison

The data is considered inconsistent, and the state is changed to COMPARE, in either of these two situations:
(1) during checkAndFreshState, the Leader finds that the current Term differs from the one recorded in memberState, or that the LeaderId is null, or that it differs from the LeaderId recorded in memberState;
(2) while validating an APPEND request (see the Follower-side checks later), a Follower finds that its data is inconsistent and sets the INCONSISTENT_STATE status in the response to notify the Leader;

In the COMPARE state, doCompare sends comparison requests to the Follower node. Its processing logic is:

  1. Call checkAndFreshState to check the state;
  2. If the type is neither COMPARE nor TRUNCATE, stop processing;
  3. If compareIndex is -1 (changeState sets compareIndex to -1 when switching to COMPARE), take LedgerEndIndex as the new value of compareIndex;
  4. If compareIndex is greater than LedgerEndIndex or smaller than LedgerBeginIndex, LedgerEndIndex is still used as compareIndex; the separate branch apparently exists only to log this case and distinguish it from step 3;
  5. Fetch the message entry at compareIndex and call buildPushRequest to build a COMPARE request;
  6. Push the COMPARE request to the Follower node for comparison (the Follower's handling of COMPARE requests is covered in the EntryHandler section below).

After the state changes to COMPARE, compareIndex is initialized to -1. In doCompare it is set to the Leader's last written message, i.e. the value of LedgerEndIndex, which is sent to the Follower for comparison.

After sending the request, the Leader waits for the COMPARE response. The Follower sets the index of its last successfully written message in the response's EndIndex field and that of its first written message in the BeginIndex field:

  1. The request succeeds:
    • If compareIndex equals the EndIndex in the Follower's response, there is no inconsistency; switch the state to APPEND;
    • Otherwise, set truncateIndex to compareIndex;
  2. If the EndIndex in the response is smaller than this node's LedgerBeginIndex, or the BeginIndex in the response is greater than this node's LedgerEndIndex, i.e. the Follower's and the Leader's index ranges do not intersect, set truncateIndex to the Leader's BeginIndex;
    According to the comments in the code, this usually happens when the Follower has been down for a long time while the Leader has deleted some expired messages;
  3. If compareIndex is smaller than the Follower's BeginIndex, set truncateIndex to the Leader's BeginIndex;
    According to the comments in the code, this usually happens when a disk has been damaged.
  4. Otherwise, keep narrowing the comparison: if compareIndex is beyond the Follower's EndIndex, it is pulled back to that EndIndex; on a failed comparison within the Follower's range, compareIndex is decremented and the comparison continues from the previous message;
  5. If truncateIndex is not -1, call doTruncate to handle the truncation;


    public class DLedgerEntryPusher {
        private class EntryDispatcher extends ShutdownAbleThread {
            private void doCompare() throws Exception {
                while (true) {
                    // Check the state
                    if (!checkAndFreshState()) {
                        break;
                    }
                    // If the type is neither COMPARE nor TRUNCATE
                    if (type.get() != PushEntryRequest.Type.COMPARE
                        && type.get() != PushEntryRequest.Type.TRUNCATE) {
                        break;
                    }
                    // If compareIndex is -1 and LedgerEndIndex is -1
                    if (compareIndex == -1 && dLedgerStore.getLedgerEndIndex() == -1) {
                        break;
                    }
                    // If compareIndex is -1
                    if (compareIndex == -1) {
                        // Use LedgerEndIndex as compareIndex
                        compareIndex = dLedgerStore.getLedgerEndIndex();
                        logger.info("[Push-{}][DoCompare] compareIndex=-1 means start to compare", peerId);
                    } else if (compareIndex > dLedgerStore.getLedgerEndIndex() || compareIndex < dLedgerStore.getLedgerBeginIndex()) {
                        logger.info("[Push-{}][DoCompare] compareIndex={} out of range {}-{}", peerId, compareIndex, dLedgerStore.getLedgerBeginIndex(), dLedgerStore.getLedgerEndIndex());
                        // Still use LedgerEndIndex as compareIndex; this extra branch apparently exists just to log the out-of-range case
                        compareIndex = dLedgerStore.getLedgerEndIndex();
                    }
                    // Fetch the message at compareIndex
                    DLedgerEntry entry = dLedgerStore.get(compareIndex);
                    PreConditions.check(entry != null, DLedgerResponseCode.INTERNAL_ERROR, "compareIndex=%d", compareIndex);
                    // Build the COMPARE request
                    PushEntryRequest request = buildPushRequest(entry, PushEntryRequest.Type.COMPARE);
                    // Send the COMPARE request
                    CompletableFuture<PushEntryResponse> responseFuture = dLedgerRpcService.push(request);
                    // Get the response
                    PushEntryResponse response = responseFuture.get(3, TimeUnit.SECONDS);
                    PreConditions.check(response != null, DLedgerResponseCode.INTERNAL_ERROR, "compareIndex=%d", compareIndex);
                    PreConditions.check(response.getCode() == DLedgerResponseCode.INCONSISTENT_STATE.getCode() || response.getCode() == DLedgerResponseCode.SUCCESS.getCode()
                        , DLedgerResponseCode.valueOf(response.getCode()), "compareIndex=%d", compareIndex);
                    long truncateIndex = -1;
                    // If the response is SUCCESS
                    if (response.getCode() == DLedgerResponseCode.SUCCESS.getCode()) {
                        // If compareIndex equals the Follower's EndIndex
                        if (compareIndex == response.getEndIndex()) {
                            // Switch to the APPEND state
                            changeState(compareIndex, PushEntryRequest.Type.APPEND);
                            break;
                        } else {
                            // Set truncateIndex to compareIndex
                            truncateIndex = compareIndex;
                        }
                    } else if (response.getEndIndex() < dLedgerStore.getLedgerBeginIndex()
                        || response.getBeginIndex() > dLedgerStore.getLedgerEndIndex()) {
                        /*
                         The follower's entries does not intersect with the leader.
                         This usually happened when the follower has crashed for a long time while the leader has deleted the expired entries.
                         Just truncate the follower.
                         */
                        // The EndIndex in the response is smaller than this node's LedgerBeginIndex, or the BeginIndex
                        // is greater than LedgerEndIndex, i.e. the Follower's and the Leader's indexes do not intersect.
                        // This usually happens when the Follower has been down for a long time while the Leader deleted expired entries.
                        // Set truncateIndex to the Leader's BeginIndex
                        truncateIndex = dLedgerStore.getLedgerBeginIndex();
                    } else if (compareIndex < response.getBeginIndex()) {
                        /*
                         The compared index is smaller than the follower's begin index.
                         This happened rarely, usually means some disk damage.
                         Just truncate the follower.
                         */
                        // compareIndex is smaller than the Follower's BeginIndex, which usually means disk damage.
                        // Set truncateIndex to the Leader's BeginIndex
                        truncateIndex = dLedgerStore.getLedgerBeginIndex();
                    } else if (compareIndex > response.getEndIndex()) {
                        /*
                         The compared index is bigger than the follower's end index.
                         This happened frequently. For the compared index is usually starting from the end index of the leader.
                         */
                        // compareIndex is greater than the Follower's EndIndex;
                        // pull compareIndex back to the Follower's EndIndex
                        compareIndex = response.getEndIndex();
                    } else {
                        /*
                         Compare failed and the compared index is in the range of follower's entries.
                         */
                        // Comparison failed; step back one message
                        compareIndex--;
                    }
                    // If compareIndex is smaller than this node's LedgerBeginIndex
                    if (compareIndex < dLedgerStore.getLedgerBeginIndex()) {
                        truncateIndex = dLedgerStore.getLedgerBeginIndex();
                    }
                    // If truncateIndex is not -1, call doTruncate to start deleting
                    if (truncateIndex != -1) {
                        changeState(truncateIndex, PushEntryRequest.Type.TRUNCATE);
                        doTruncate(truncateIndex);
                        break;
                    }
                }
            }
        }
    }

In doTruncate, a TRUNCATE request is built with truncateIndex (the index of the message to delete) and sent to the Follower node, telling it to delete the message where the data became inconsistent. If the response succeeds, changeState is then called to switch back to APPEND; inside changeState, updatePeerWaterMark updates the node's replication progress to the index where the inconsistency occurred, and writeIndex is updated as well, so the next APPEND request to the Follower resumes writing from writeIndex:


    private class EntryDispatcher extends ShutdownAbleThread {
        private void doTruncate(long truncateIndex) throws Exception {
            PreConditions.check(type.get() == PushEntryRequest.Type.TRUNCATE, DLedgerResponseCode.UNKNOWN);
            DLedgerEntry truncateEntry = dLedgerStore.get(truncateIndex);
            PreConditions.check(truncateEntry != null, DLedgerResponseCode.UNKNOWN);
            logger.info("[Push-{}]Will push data to truncate truncateIndex={} pos={}", peerId, truncateIndex, truncateEntry.getPos());
            // Build the TRUNCATE request
            PushEntryRequest truncateRequest = buildPushRequest(truncateEntry, PushEntryRequest.Type.TRUNCATE);
            // Send the TRUNCATE request to the Follower node
            PushEntryResponse truncateResponse = dLedgerRpcService.push(truncateRequest).get(3, TimeUnit.SECONDS);
            PreConditions.check(truncateResponse != null, DLedgerResponseCode.UNKNOWN, "truncateIndex=%d", truncateIndex);
            PreConditions.check(truncateResponse.getCode() == DLedgerResponseCode.SUCCESS.getCode(), DLedgerResponseCode.valueOf(truncateResponse.getCode()), "truncateIndex=%d", truncateIndex);
            lastPushCommitTimeMs = System.currentTimeMillis();
            // Switch back to the APPEND state
            changeState(truncateIndex, PushEntryRequest.Type.APPEND);
        }

        private synchronized void changeState(long index, PushEntryRequest.Type target) {
            logger.info("[Push-{}]Change state from {} to {} at {}", peerId, type.get(), target, index);
            switch (target) {
                case APPEND:
                    compareIndex = -1;
                    // Update the node's replication progress to the index where the inconsistency occurred
                    updatePeerWaterMark(term, peerId, index);
                    // Wake up the quorumAckChecker
                    quorumAckChecker.wakeup();
                    // Update writeIndex
                    writeIndex = index + 1;
                    if (dLedgerConfig.isEnableBatchPush()) {
                        resetBatchAppendEntryRequest();
                    }
                    break;
                // ...
            }
            type.set(target);
        }
    }

EntryHandler

EntryHandler is used by a Follower node to handle the requests sent by the Leader. Requests are handled in handlePush, which does the following depending on the request type:

  1. For an APPEND request, put it into writeRequestMap;
  2. For a COMMIT request, put it into compareOrTruncateRequests;
  3. For a COMPARE or TRUNCATE request, put it into compareOrTruncateRequests;

handlePush does not process the requests directly; it only sorts the different types into different request collections. The actual processing happens on another thread, in the doWork method.


    public class DLedgerEntryPusher {
        private class EntryHandler extends ShutdownAbleThread {
            ConcurrentMap<Long, Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>>> writeRequestMap = new ConcurrentHashMap<>();
            BlockingQueue<Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>>> compareOrTruncateRequests = new ArrayBlockingQueue<Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>>>(100);

            public CompletableFuture<PushEntryResponse> handlePush(PushEntryRequest request) throws Exception {
                CompletableFuture<PushEntryResponse> future = new TimeoutFuture<>(1000);
                switch (request.getType()) {
                    case APPEND: // Append request
                        if (request.isBatch()) {
                            PreConditions.check(request.getBatchEntry() != null && request.getCount() > 0, DLedgerResponseCode.UNEXPECTED_ARGUMENT);
                        } else {
                            PreConditions.check(request.getEntry() != null, DLedgerResponseCode.UNEXPECTED_ARGUMENT);
                        }
                        long index = request.getFirstEntryIndex();
                        // Put the request into writeRequestMap
                        Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>> old = writeRequestMap.putIfAbsent(index, new Pair<>(request, future));
                        if (old != null) {
                            logger.warn("[MONITOR]The index {} has already existed with {} and curr is {}", index, old.getKey().baseInfo(), request.baseInfo());
                            future.complete(buildResponse(request, DLedgerResponseCode.REPEATED_PUSH.getCode()));
                        }
                        break;
                    case COMMIT: // Commit request
                        // Put it into compareOrTruncateRequests
                        compareOrTruncateRequests.put(new Pair<>(request, future));
                        break;
                    case COMPARE:
                    case TRUNCATE:
                        PreConditions.check(request.getEntry() != null, DLedgerResponseCode.UNEXPECTED_ARGUMENT);
                        writeRequestMap.clear();
                        // Put it into compareOrTruncateRequests
                        compareOrTruncateRequests.put(new Pair<>(request, future));
                        break;
                    default:
                        logger.error("[BUG]Unknown type {} from {}", request.getType(), request.baseInfo());
                        future.complete(buildResponse(request, DLedgerResponseCode.UNEXPECTED_ARGUMENT.getCode()));
                        break;
                }
                wakeup();
                return future;
            }
        }
    }

EntryHandler also extends ShutdownAbleThread, so a thread is started to run its doWork method, where the queued requests are processed:

  1. If compareOrTruncateRequests is not empty, dispatch on the request type:
    • TRUNCATE: handled by handleDoTruncate;
    • COMPARE: handled by handleDoCompare;
    • COMMIT: handled by handleDoCommit;
  2. Otherwise the request is treated as an APPEND:
    (1) LedgerEndIndex records the index of the last successfully written message, so adding 1 to it gives the index of the next message to write;
    (2) use that index to fetch the request from writeRequestMap; if nothing is found, call checkAbnormalFuture to check for anomalies;
    (3) if a request is found, call handleDoAppend to write the message;
    This shows that the Follower determines the next message to write by taking its recorded last successfully written index (LedgerEndIndex) and adding 1.


    public class DLedgerEntryPusher {
        private class EntryHandler extends ShutdownAbleThread {
            @Override
            public void doWork() {
                try {
                    // Check whether this node is a Follower
                    if (!memberState.isFollower()) {
                        waitForRunning(1);
                        return;
                    }
                    // If compareOrTruncateRequests is not empty
                    if (compareOrTruncateRequests.peek() != null) {
                        Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>> pair = compareOrTruncateRequests.poll();
                        PreConditions.check(pair != null, DLedgerResponseCode.UNKNOWN);
                        switch (pair.getKey().getType()) {
                            case TRUNCATE: // TRUNCATE
                                handleDoTruncate(pair.getKey().getEntry().getIndex(), pair.getKey(), pair.getValue());
                                break;
                            case COMPARE: // COMPARE
                                handleDoCompare(pair.getKey().getEntry().getIndex(), pair.getKey(), pair.getValue());
                                break;
                            case COMMIT: // COMMIT
                                handleDoCommit(pair.getKey().getCommitIndex(), pair.getKey(), pair.getValue());
                                break;
                            default:
                                break;
                        }
                    } else {
                        // The next message index is the last successfully written index + 1
                        long nextIndex = dLedgerStore.getLedgerEndIndex() + 1;
                        // Take the request out of writeRequestMap
                        Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>> pair = writeRequestMap.remove(nextIndex);
                        // If no request was found, call checkAbnormalFuture to check for anomalies
                        if (pair == null) {
                            checkAbnormalFuture(dLedgerStore.getLedgerEndIndex());
                            waitForRunning(1);
                            return;
                        }
                        PushEntryRequest request = pair.getKey();
                        if (request.isBatch()) {
                            handleDoBatchAppend(nextIndex, request, pair.getValue());
                        } else {
                            // Handle the append
                            handleDoAppend(nextIndex, request, pair.getValue());
                        }
                    }
                } catch (Throwable t) {
                    DLedgerEntryPusher.logger.error("Error in {}", getName(), t);
                    DLedgerUtils.sleep(100);
                }
            }
        }
    }

Follower Inconsistency Check

checkAbnormalFuture
The checkAbnormalFuture method checks data consistency. Its logic is as follows:
  1. If less than 1000 ms have passed since the last check, return immediately;
  2. Update the check time lastCheckFastForwardTimeMs;
  3. If writeRequestMap is empty, there are currently no write requests and nothing needs to be handled;
  4. Call checkAppendFuture to perform the actual check;

    public class DLedgerEntryPusher {
        private class EntryHandler extends ShutdownAbleThread {
            /**
             * The leader does push entries to follower, and record the pushed index. But in the following conditions, the push may get stopped.
             *   * If the follower is abnormally shutdown, its ledger end index may be smaller than before. At this time, the leader may push fast-forward entries, and retry all the time.
             *   * If the last ack is missed, and no new message is coming in. The leader may retry push the last message, but the follower will ignore it.
             * @param endIndex
             */
            private void checkAbnormalFuture(long endIndex) {
                // If less than 1000 ms have passed since the last check
                if (DLedgerUtils.elapsed(lastCheckFastForwardTimeMs) < 1000) {
                    return;
                }
                // Update the check time
                lastCheckFastForwardTimeMs = System.currentTimeMillis();
                // If writeRequestMap is empty there are no write requests; nothing to handle
                if (writeRequestMap.isEmpty()) {
                    return;
                }
                // Perform the check
                checkAppendFuture(endIndex);
            }
        }
    }

The endIndex parameter of checkAppendFuture is the index of the last message this node has successfully written, i.e. the LedgerEndIndex value passed in from doWork; the next message the Follower expects to write is therefore endIndex + 1. The method's logic is:

  1. minFastForwardIndex is initialized to the maximum value; it is used to find the smallest message index at which the data is inconsistent;

  2. Iterate over writeRequestMap and handle every in-flight write request:
    (1) since a request may be a batch, take the index of the first message in the request as firstEntryIndex;
    (2) take the index of the last message in the request as lastEntryIndex;
    (3) if lastEntryIndex is less than or equal to endIndex, proceed as follows:

    • compare the messages in the request with the ones stored on this node: for a batch request, iterate over every message in it and fetch the stored message with the same index for comparison. Messages up to endIndex have already been written successfully, so a write request still sitting in writeRequestMap presumably failed to be removed for some reason; if the comparison matches, the request's response can be completed and the request removed from writeRequestMap; if it does not match, control goes into the exception handler, which builds a response with status INCONSISTENT_STATE to notify the Leader of the inconsistency;

    (4) if firstEntryIndex equals endIndex + 1, the request is for the message immediately after the last written one (endIndex is the last successfully written index, so endIndex + 1 is exactly the next message the Follower expects); nothing is abnormal, and this round of checking ends;
    (5) check whether the request has been pending longer than its timeout; if not, continue with the next request; if it has, go to the next step;
    (6) reaching this point indicates an inconsistency: if firstEntryIndex is smaller than minFastForwardIndex, update minFastForwardIndex to record the smallest inconsistent message index;

  3. If minFastForwardIndex is still MAX_VALUE, no inconsistent message was found; return;

  4. Fetch the request for minFastForwardIndex from writeRequestMap; if it is absent, return; otherwise call buildBatchAppendResponse to build a response announcing the inconsistency and notify the Leader;


    private class EntryHandler extends ShutdownAbleThread {
        private void checkAppendFuture(long endIndex) {
            // Initialize to the maximum value
            long minFastForwardIndex = Long.MAX_VALUE;
            // Iterate over the values of writeRequestMap
            for (Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>> pair : writeRequestMap.values()) {
                // The index of the first message in the request
                long firstEntryIndex = pair.getKey().getFirstEntryIndex();
                // The index of the last message in the request
                long lastEntryIndex = pair.getKey().getLastEntryIndex();
                // If less than or equal to endIndex
                if (lastEntryIndex <= endIndex) {
                    try {
                        if (pair.getKey().isBatch()) { // batch request
                            // Iterate over all the messages
                            for (DLedgerEntry dLedgerEntry : pair.getKey().getBatchEntry()) {
                                // Check that each message matches the one stored on this node
                                PreConditions.check(dLedgerEntry.equals(dLedgerStore.get(dLedgerEntry.getIndex())), DLedgerResponseCode.INCONSISTENT_STATE);
                            }
                        } else {
                            DLedgerEntry dLedgerEntry = pair.getKey().getEntry();
                            // Check that the message in the request matches the one stored on this node
                            PreConditions.check(dLedgerEntry.equals(dLedgerStore.get(dLedgerEntry.getIndex())), DLedgerResponseCode.INCONSISTENT_STATE);
                        }
                        // Complete the response
                        pair.getValue().complete(buildBatchAppendResponse(pair.getKey(), DLedgerResponseCode.SUCCESS.getCode()));
                        logger.warn("[PushFallBehind]The leader pushed an batch append entry last index={} smaller than current ledgerEndIndex={}, maybe the last ack is missed", lastEntryIndex, endIndex);
                    } catch (Throwable t) {
                        logger.error("[PushFallBehind]The leader pushed an batch append entry last index={} smaller than current ledgerEndIndex={}, maybe the last ack is missed", lastEntryIndex, endIndex, t);
                        // On mismatch, respond to the Leader with the inconsistency
                        pair.getValue().complete(buildBatchAppendResponse(pair.getKey(), DLedgerResponseCode.INCONSISTENT_STATE.getCode()));
                    }
                    // Remove the handled request from writeRequestMap
                    writeRequestMap.remove(pair.getKey().getFirstEntryIndex());
                    continue;
                }
                // If firstEntryIndex equals endIndex + 1, this request is for the message right after endIndex; end this round of checking
                if (firstEntryIndex == endIndex + 1) {
                    return;
                }
                // Check whether the response has timed out; if not, continue with the next one
                TimeoutFuture<PushEntryResponse> future = (TimeoutFuture<PushEntryResponse>) pair.getValue();
                if (!future.isTimeOut()) {
                    continue;
                }
                // If firstEntryIndex is smaller than minFastForwardIndex
                if (firstEntryIndex < minFastForwardIndex) {
                    // Update minFastForwardIndex
                    minFastForwardIndex = firstEntryIndex;
                }
            }
            // If minFastForwardIndex is still MAX_VALUE, there are no inconsistent messages; return
            if (minFastForwardIndex == Long.MAX_VALUE) {
                return;
            }
            // Fetch the request for minFastForwardIndex
            Pair<PushEntryRequest, CompletableFuture<PushEntryResponse>> pair = writeRequestMap.get(minFastForwardIndex);
            if (pair == null) { // return if absent
                return;
            }
            logger.warn("[PushFastForward] ledgerEndIndex={} entryIndex={}", endIndex, minFastForwardIndex);
            // Respond to the Leader with status INCONSISTENT_STATE
            pair.getValue().complete(buildBatchAppendResponse(pair.getKey(), DLedgerResponseCode.INCONSISTENT_STATE.getCode()));
        }

        private PushEntryResponse buildBatchAppendResponse(PushEntryRequest request, int code) {
            PushEntryResponse response = new PushEntryResponse();
            response.setGroup(request.getGroup());
            response.setCode(code);
            response.setTerm(request.getTerm());
            response.setIndex(request.getLastEntryIndex());
            // Set this node's LedgerBeginIndex
            response.setBeginIndex(dLedgerStore.getLedgerBeginIndex());
            // Set the LedgerEndIndex
            response.setEndIndex(dLedgerStore.getLedgerEndIndex());
            return response;
        }
    }

Follower-Side Message Write

handleDoAppend
The handleDoAppend method handles Append requests, writing the messages forwarded by the Leader to the log file:
  1. Take the message Entry from the request and call appendAsFollower to write it to the file;
  2. Call updateCommittedIndex to apply the commitIndex carried in the Leader's request to this Follower locally; this comes up again later when discussing QuorumAckChecker;

    public class DLedgerEntryPusher {
        private class EntryHandler extends ShutdownAbleThread {
            private void handleDoAppend(long writeIndex, PushEntryRequest request,
                CompletableFuture<PushEntryResponse> future) {
                try {
                    PreConditions.check(writeIndex == request.getEntry().getIndex(), DLedgerResponseCode.INCONSISTENT_STATE);
                    // Write the message to the log
                    DLedgerEntry entry = dLedgerStore.appendAsFollower(request.getEntry(), request.getTerm(), request.getLeaderId());
                    PreConditions.check(entry.getIndex() == writeIndex, DLedgerResponseCode.INCONSISTENT_STATE);
                    future.complete(buildResponse(request, DLedgerResponseCode.SUCCESS.getCode()));
                    // Update the committedIndex
                    dLedgerStore.updateCommittedIndex(request.getTerm(), request.getCommitIndex());
                } catch (Throwable t) {
                    logger.error("[HandleDoWrite] writeIndex={}", writeIndex, t);
                    future.complete(buildResponse(request, DLedgerResponseCode.INCONSISTENT_STATE.getCode()));
                }
            }
        }
    }
Writing to the File

Again taking DLedgerMmapFileStore as the example, let's look at appendAsFollower. Since appendAsLeader was covered earlier and the two are very similar, i.e. the entry content is written into buffers which are then written to the data file and the index file, the details are not repeated here:


    public class DLedgerMmapFileStore extends DLedgerStore {
        @Override
        public DLedgerEntry appendAsFollower(DLedgerEntry entry, long leaderTerm, String leaderId) {
            PreConditions.check(memberState.isFollower(), DLedgerResponseCode.NOT_FOLLOWER, "role=%s", memberState.getRole());
            PreConditions.check(!isDiskFull, DLedgerResponseCode.DISK_FULL);
            // Get the data buffer
            ByteBuffer dataBuffer = localEntryBuffer.get();
            // Get the index buffer
            ByteBuffer indexBuffer = localIndexBuffer.get();
            // Encode
            DLedgerEntryCoder.encode(entry, dataBuffer);
            int entrySize = dataBuffer.remaining();
            synchronized (memberState) {
                PreConditions.check(memberState.isFollower(), DLedgerResponseCode.NOT_FOLLOWER, "role=%s", memberState.getRole());
                long nextIndex = ledgerEndIndex + 1;
                PreConditions.check(nextIndex == entry.getIndex(), DLedgerResponseCode.INCONSISTENT_INDEX, null);
                PreConditions.check(leaderTerm == memberState.currTerm(), DLedgerResponseCode.INCONSISTENT_TERM, null);
                PreConditions.check(leaderId.equals(memberState.getLeaderId()), DLedgerResponseCode.INCONSISTENT_LEADER, null);
                // Write to the data file
                long dataPos = dataFileList.append(dataBuffer.array(), 0, dataBuffer.remaining());
                PreConditions.check(dataPos == entry.getPos(), DLedgerResponseCode.DISK_ERROR, "%d != %d", dataPos, entry.getPos());
                DLedgerEntryCoder.encodeIndex(dataPos, entrySize, entry.getMagic(), entry.getIndex(), entry.getTerm(), indexBuffer);
                // Write to the index file
                long indexPos = indexFileList.append(indexBuffer.array(), 0, indexBuffer.remaining(), false);
                PreConditions.check(indexPos == entry.getIndex() * INDEX_UNIT_SIZE, DLedgerResponseCode.DISK_ERROR, null);
                ledgerEndTerm = entry.getTerm();
                ledgerEndIndex = entry.getIndex();
                if (ledgerBeginIndex == -1) {
                    ledgerBeginIndex = ledgerEndIndex;
                }
                updateLedgerEndIndexAndTerm();
                return entry;
            }
        }
    }

Compare

handleDoCompare
handleDoCompare handles COMPARE requests; compareIndex is the index to compare. The logic is:
  1. Validate, mainly that compareIndex matches the index in the request and that the request type is COMPARE;
  2. Fetch the message Entry at compareIndex;
  3. Build the response, setting the BeginIndex and EndIndex of the messages currently synchronized on this node;

    public class DLedgerEntryPusher {
        private class EntryHandler extends ShutdownAbleThread {
            private CompletableFuture<PushEntryResponse> handleDoCompare(long compareIndex, PushEntryRequest request,
                CompletableFuture<PushEntryResponse> future) {
                try {
                    // Check that compareIndex matches the index in the request
                    PreConditions.check(compareIndex == request.getEntry().getIndex(), DLedgerResponseCode.UNKNOWN);
                    // Check that the request type is COMPARE
                    PreConditions.check(request.getType() == PushEntryRequest.Type.COMPARE, DLedgerResponseCode.UNKNOWN);
                    // Fetch the local entry
                    DLedgerEntry local = dLedgerStore.get(compareIndex);
                    // Check that the entry in the request matches the local one
                    PreConditions.check(request.getEntry().equals(local), DLedgerResponseCode.INCONSISTENT_STATE);
                    // Build the response; SUCCESS here means no inconsistency was found
                    future.complete(buildResponse(request, DLedgerResponseCode.SUCCESS.getCode()));
                } catch (Throwable t) {
                    logger.error("[HandleDoCompare] compareIndex={}", compareIndex, t);
                    future.complete(buildResponse(request, DLedgerResponseCode.INCONSISTENT_STATE.getCode()));
                }
                return future;
            }

            private PushEntryResponse buildResponse(PushEntryRequest request, int code) {
                // Build the response
                PushEntryResponse response = new PushEntryResponse();
                response.setGroup(request.getGroup());
                // Set the response code
                response.setCode(code);
                // Set the Term
                response.setTerm(request.getTerm());
                // If not a COMMIT request
                if (request.getType() != PushEntryRequest.Type.COMMIT) {
                    // Set the index
                    response.setIndex(request.getEntry().getIndex());
                }
                // Set the BeginIndex
                response.setBeginIndex(dLedgerStore.getLedgerBeginIndex());
                // Set the EndIndex
                response.setEndIndex(dLedgerStore.getLedgerEndIndex());
                return response;
            }
        }
    }

Truncate

The Follower handles TRUNCATE requests in handleDoTruncate. Based on the truncateIndex sent by the Leader, it deletes the entries from truncateIndex onward from its local log:

private class EntryHandler extends ShutdownAbleThread {
    // truncateIndex is the index from which entries will be deleted
    private CompletableFuture<PushEntryResponse> handleDoTruncate(long truncateIndex, PushEntryRequest request,
        CompletableFuture<PushEntryResponse> future) {
        try {
            logger.info("[HandleDoTruncate] truncateIndex={} pos={}", truncateIndex, request.getEntry().getPos());
            PreConditions.check(truncateIndex == request.getEntry().getIndex(), DLedgerResponseCode.UNKNOWN);
            PreConditions.check(request.getType() == PushEntryRequest.Type.TRUNCATE, DLedgerResponseCode.UNKNOWN);
            // Perform the truncation
            long index = dLedgerStore.truncate(request.getEntry(), request.getTerm(), request.getLeaderId());
            PreConditions.check(index == truncateIndex, DLedgerResponseCode.INCONSISTENT_STATE);
            future.complete(buildResponse(request, DLedgerResponseCode.SUCCESS.getCode()));
            // Update committedIndex
            dLedgerStore.updateCommittedIndex(request.getTerm(), request.getCommitIndex());
        } catch (Throwable t) {
            logger.error("[HandleDoTruncate] truncateIndex={}", truncateIndex, t);
            future.complete(buildResponse(request, DLedgerResponseCode.INCONSISTENT_STATE.getCode()));
        }
        return future;
    }
}

Commit

As mentioned earlier, the Leader sends COMMIT requests to the Follower nodes. A COMMIT request mainly updates the Follower's local committedIndex, which records the index of the latest entry in the cluster that has been acknowledged by a majority of nodes; it will come up again in QuorumAckChecker below:

private class EntryHandler extends ShutdownAbleThread {
    private CompletableFuture<PushEntryResponse> handleDoCommit(long committedIndex, PushEntryRequest request,
        CompletableFuture<PushEntryResponse> future) {
        try {
            PreConditions.check(committedIndex == request.getCommitIndex(), DLedgerResponseCode.UNKNOWN);
            PreConditions.check(request.getType() == PushEntryRequest.Type.COMMIT, DLedgerResponseCode.UNKNOWN);
            // Update committedIndex
            dLedgerStore.updateCommittedIndex(request.getTerm(), committedIndex);
            future.complete(buildResponse(request, DLedgerResponseCode.SUCCESS.getCode()));
        } catch (Throwable t) {
            logger.error("[HandleDoCommit] committedIndex={}", request.getCommitIndex(), t);
            future.complete(buildResponse(request, DLedgerResponseCode.UNKNOWN.getCode()));
        }
        return future;
    }
}

QuorumAckChecker

QuorumAckChecker runs on the Leader and waits for the Followers to finish replication. Its processing logic is as follows:

  1. If pendingAppendResponsesByTerm holds more than one term, iterate over it; any key that differs from the current term is stale data, so complete its pending responses and remove it from pendingAppendResponsesByTerm;

  2. If peerWaterMarksByTerm holds more than one term, likewise iterate over it and clean up the entries whose term differs from the current one;

  3. Get the peerWaterMarks of the current term. peerWaterMarks records each Follower node's replication progress; sort all the progress values and take the one in the middle position, which is a message index. This is easier to understand with an example: suppose the Leader has five Followers and the current term is 1:

    {
        "1" : {            // the term, i.e. the key in peerWaterMarks
            "node1" : "1", // node1 has replicated up to message 1
            "node2" : "1", // node2 has replicated up to message 1
            "node3" : "2", // node3 has replicated up to message 2
            "node4" : "3", // node4 has replicated up to message 3
            "node5" : "3"  // node5 has replicated up to message 3
        }
    }

    Sorting all the Followers' replication progress in descending order gives the following list:

    [3, 3, 2, 1, 1]

    The integer part of 5 / 2 is 2, so take the value at index 2, which corresponds to node3's progress (message index 2), and record it in quorumIndex. Node4 and node5 have replicated past message 2, so message 2 has been successfully replicated on three of the five nodes, which satisfies the condition that a majority of the cluster has replicated it.

    To decide whether a given message has been written by a majority of the cluster, the straightforward approach is to check each node's progress and count how many nodes have replicated it; that requires scanning every node on each check and is inefficient, so RocketMQ uses this more efficient middle-position calculation to decide whether a message has been acknowledged by a majority (see the standalone sketch after this list).

  4. All entries up to quorumIndex have been replicated successfully, so the commit point can be advanced: updateCommittedIndex is called to update committedIndex;

  5. Handle the responses whose index lies between lastQuorumIndex (the previous value of quorumIndex) and quorumIndex. For example, if lastQuorumIndex was 1 and quorumIndex is now 2: since everything up to quorumIndex has been acknowledged by a majority of the cluster, the pending responses in this range can be completed and removed from pendingAppendResponsesByTerm, with the number removed counted in ackNum;

  6. If ackNum is 0, quorumIndex equals lastQuorumIndex. Starting from quorumIndex + 1, check whether each pending append request has timed out, and if so complete it with WAIT_QUORUM_ACK_TIMEOUT; this step mainly handles timed-out requests;

  7. If more than 1000 ms have elapsed since the last check, or needCheck is true, update this node's replication progress and iterate all pending responses of the current term; any response with an index smaller than quorumIndex is completed and removed. This step handles the AppendEntryResponse futures of messages that were already written successfully but, for some reason, were never removed; if found, they are cleaned up;

  8. Update lastQuorumIndex;
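
Before the actual source below, here is a minimal standalone sketch of the majority calculation from step 3. It is not the DLedger source; the class name and the node names/watermark values are hypothetical, taken from the example above. Sorting the watermarks in descending order and picking the element at index size / 2 yields the quorum index:

import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class QuorumIndexDemo {
    public static void main(String[] args) {
        // Hypothetical replication progress of five followers (from the example above)
        Map<String, Long> peerWaterMarks = new HashMap<>();
        peerWaterMarks.put("node1", 1L);
        peerWaterMarks.put("node2", 1L);
        peerWaterMarks.put("node3", 2L);
        peerWaterMarks.put("node4", 3L);
        peerWaterMarks.put("node5", 3L);

        // Sort the watermarks in descending order: [3, 3, 2, 1, 1]
        List<Long> sortedWaterMarks = peerWaterMarks.values()
            .stream()
            .sorted(Comparator.reverseOrder())
            .collect(Collectors.toList());

        // The element at index size / 2 has been replicated by a majority:
        // it and every element before it come from nodes whose progress is
        // at least this value, i.e. 3 of the 5 nodes here
        long quorumIndex = sortedWaterMarks.get(sortedWaterMarks.size() / 2);
        System.out.println(quorumIndex); // prints 2
    }
}

With size / 2 as the cut-off, at least size / 2 + 1 watermarks are greater than or equal to quorumIndex, which is exactly the majority condition, without having to count acknowledgments per entry.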

private class QuorumAckChecker extends ShutdownAbleThread {
    @Override
    public void doWork() {
        try {
            if (DLedgerUtils.elapsed(lastPrintWatermarkTimeMs) > 3000) {
                logger.info("[{}][{}] term={} ledgerBegin={} ledgerEnd={} committed={} watermarks={}",
                    memberState.getSelfId(), memberState.getRole(), memberState.currTerm(), dLedgerStore.getLedgerBeginIndex(), dLedgerStore.getLedgerEndIndex(), dLedgerStore.getCommittedIndex(), JSON.toJSONString(peerWaterMarksByTerm));
                lastPrintWatermarkTimeMs = System.currentTimeMillis();
            }
            // Only the Leader runs the quorum check
            if (!memberState.isLeader()) {
                waitForRunning(1);
                return;
            }
            // Get the current term
            long currTerm = memberState.currTerm();
            checkTermForPendingMap(currTerm, "QuorumAckChecker");
            checkTermForWaterMark(currTerm, "QuorumAckChecker");
            // If pendingAppendResponsesByTerm holds more than one term
            if (pendingAppendResponsesByTerm.size() > 1) {
                // Iterate and clean up the data of terms that differ from the current one
                for (Long term : pendingAppendResponsesByTerm.keySet()) {
                    // Skip the current term
                    if (term == currTerm) {
                        continue;
                    }
                    // Iterate the pending responses of the stale term
                    for (Map.Entry<Long, TimeoutFuture<AppendEntryResponse>> futureEntry : pendingAppendResponsesByTerm.get(term).entrySet()) {
                        // Create an AppendEntryResponse with code TERM_CHANGED
                        AppendEntryResponse response = new AppendEntryResponse();
                        response.setGroup(memberState.getGroup());
                        response.setIndex(futureEntry.getKey());
                        response.setCode(DLedgerResponseCode.TERM_CHANGED.getCode());
                        response.setLeaderId(memberState.getLeaderId());
                        logger.info("[TermChange] Will clear the pending response index={} for term changed from {} to {}", futureEntry.getKey(), term, currTerm);
                        // Complete the future
                        futureEntry.getValue().complete(response);
                    }
                    // Remove the stale term
                    pendingAppendResponsesByTerm.remove(term);
                }
            }
            // Clean up the watermarks whose term differs from the current one
            if (peerWaterMarksByTerm.size() > 1) {
                for (Long term : peerWaterMarksByTerm.keySet()) {
                    if (term == currTerm) {
                        continue;
                    }
                    logger.info("[TermChange] Will clear the watermarks for term changed from {} to {}", term, currTerm);
                    peerWaterMarksByTerm.remove(term);
                }
            }
            // Get the peerWaterMarks of the current term, i.e. each Follower's replication progress
            Map<String, Long> peerWaterMarks = peerWaterMarksByTerm.get(currTerm);
            // Sort the progress values in descending order
            List<Long> sortedWaterMarks = peerWaterMarks.values()
                .stream()
                .sorted(Comparator.reverseOrder())
                .collect(Collectors.toList());
            // Take the value in the middle position: the quorum index
            long quorumIndex = sortedWaterMarks.get(sortedWaterMarks.size() / 2);
            // Entries up to the quorum index have been replicated by a majority, so advance committedIndex
            dLedgerStore.updateCommittedIndex(currTerm, quorumIndex);
            // Get the pending append responses of the current term
            ConcurrentMap<Long, TimeoutFuture<AppendEntryResponse>> responses = pendingAppendResponsesByTerm.get(currTerm);
            boolean needCheck = false;
            int ackNum = 0;
            // Walk backwards from quorumIndex to lastQuorumIndex (the previous quorumIndex),
            // handling the responses in between
            for (Long i = quorumIndex; i > lastQuorumIndex; i--) {
                try {
                    // Remove the response
                    CompletableFuture<AppendEntryResponse> future = responses.remove(i);
                    if (future == null) { // No pending response for this index: set needCheck
                        needCheck = true;
                        break;
                    } else if (!future.isDone()) { // Not completed yet: complete it successfully
                        AppendEntryResponse response = new AppendEntryResponse();
                        response.setGroup(memberState.getGroup());
                        response.setTerm(currTerm);
                        response.setIndex(i);
                        response.setLeaderId(memberState.getSelfId());
                        response.setPos(((AppendFuture) future).getPos());
                        future.complete(response);
                    }
                    // Count the acknowledged entries
                    ackNum++;
                } catch (Throwable t) {
                    logger.error("Error in ack to index={} term={}", i, currTerm, t);
                }
            }
            // If ackNum is 0, quorumIndex equals lastQuorumIndex;
            // this branch mainly handles timed-out requests
            if (ackNum == 0) {
                // Start checking from quorumIndex + 1
                for (long i = quorumIndex + 1; i < Integer.MAX_VALUE; i++) {
                    TimeoutFuture<AppendEntryResponse> future = responses.get(i);
                    if (future == null) { // There is no entry i yet: stop
                        break;
                    } else if (future.isTimeOut()) { // The request for entry i has timed out
                        AppendEntryResponse response = new AppendEntryResponse();
                        response.setGroup(memberState.getGroup());
                        // Complete it with WAIT_QUORUM_ACK_TIMEOUT
                        response.setCode(DLedgerResponseCode.WAIT_QUORUM_ACK_TIMEOUT.getCode());
                        response.setTerm(currTerm);
                        response.setIndex(i);
                        response.setLeaderId(memberState.getSelfId());
                        // Complete the future
                        future.complete(response);
                    } else {
                        break;
                    }
                }
                waitForRunning(1);
            }
            // If more than 1000 ms have elapsed since the last leak check, or needCheck is true;
            // this step cleans up the responses of entries that were already written successfully
            // but for some reason were never removed
            if (DLedgerUtils.elapsed(lastCheckLeakTimeMs) > 1000 || needCheck) {
                // Update this node's own replication progress
                updatePeerWaterMark(currTerm, memberState.getSelfId(), dLedgerStore.getLedgerEndIndex());
                // Iterate all pending responses of the current term
                for (Map.Entry<Long, TimeoutFuture<AppendEntryResponse>> futureEntry : responses.entrySet()) {
                    // Smaller than quorumIndex: already replicated by a majority
                    if (futureEntry.getKey() < quorumIndex) {
                        AppendEntryResponse response = new AppendEntryResponse();
                        response.setGroup(memberState.getGroup());
                        response.setTerm(currTerm);
                        response.setIndex(futureEntry.getKey());
                        response.setLeaderId(memberState.getSelfId());
                        response.setPos(((AppendFuture) futureEntry.getValue()).getPos());
                        futureEntry.getValue().complete(response);
                        // Remove the response
                        responses.remove(futureEntry.getKey());
                    }
                }
                lastCheckLeakTimeMs = System.currentTimeMillis();
            }
            // Update lastQuorumIndex
            lastQuorumIndex = quorumIndex;
        } catch (Throwable t) {
            DLedgerEntryPusher.logger.error("Error in {}", getName(), t);
            DLedgerUtils.sleep(100);
        }
    }
}

Persistence

After a message has been acknowledged by a majority of the Follower nodes, the Leader calls updateCommittedIndex to record the message's index in committedIndex. As mentioned above, when a Follower receives an APPEND request from the Leader, it also updates its local committedIndex with the value the Leader set in the request.

The persistCheckPoint method persists ledgerEndIndex and committedIndex to the checkpoint file (when the Broker stops or on flush):

ledgerEndIndex: the index of the last message successfully written on the Leader or Follower node;

committedIndex: once a message forwarded to the Followers has been acknowledged by a majority of the cluster, its index is recorded in committedIndex, meaning every message up to that index has been committed; committed messages can be consumed by consumers. The Leader propagates this value to the Followers either inside APPEND requests or via dedicated COMMIT requests;

public class DLedgerMmapFileStore extends DLedgerStore {
    public void updateCommittedIndex(long term, long newCommittedIndex) {
        if (newCommittedIndex == -1
            || ledgerEndIndex == -1
            || term < memberState.currTerm()
            || newCommittedIndex == this.committedIndex) {
            return;
        }
        if (newCommittedIndex < this.committedIndex
            || newCommittedIndex < this.ledgerBeginIndex) {
            logger.warn("[MONITOR]Skip update committed index for new={} < old={} or new={} < beginIndex={}", newCommittedIndex, this.committedIndex, newCommittedIndex, this.ledgerBeginIndex);
            return;
        }
        // Get ledgerEndIndex
        long endIndex = ledgerEndIndex;
        // The commit index cannot exceed the index of the last written entry
        if (newCommittedIndex > endIndex) {
            // Cap it at endIndex
            newCommittedIndex = endIndex;
        }
        Pair<Long, Integer> posAndSize = getEntryPosAndSize(newCommittedIndex);
        PreConditions.check(posAndSize != null, DLedgerResponseCode.DISK_ERROR);
        this.committedIndex = newCommittedIndex;
        this.committedPos = posAndSize.getKey() + posAndSize.getValue();
    }

    // Persist the checkpoint
    void persistCheckPoint() {
        try {
            Properties properties = new Properties();
            // Store ledgerEndIndex
            properties.put(END_INDEX_KEY, getLedgerEndIndex());
            // Store committedIndex
            properties.put(COMMITTED_INDEX_KEY, getCommittedIndex());
            String data = IOUtils.properties2String(properties);
            // Write the data to the checkpoint file
            IOUtils.string2File(data, dLedgerConfig.getDefaultPath() + File.separator + CHECK_POINT_FILE);
        } catch (Throwable t) {
            logger.error("Persist checkpoint failed", t);
        }
    }
}
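
Since the checkpoint is written from a Properties object, the file on disk is an ordinary Java properties file. The following is a minimal sketch of reading it back with the standard library; the file path and the key names endIndex and committedIndex are assumptions for illustration (they are meant to correspond to CHECK_POINT_FILE, END_INDEX_KEY and COMMITTED_INDEX_KEY in the source above), not a DLedger API:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class CheckPointReader {
    public static void main(String[] args) throws IOException {
        // Hypothetical path: dLedgerConfig.getDefaultPath() + File.separator + CHECK_POINT_FILE
        String checkPointFile = "/tmp/dledger/checkpoint";
        Properties properties = new Properties();
        try (FileInputStream in = new FileInputStream(checkPointFile)) {
            properties.load(in);
        }
        // Assumed key names, matching END_INDEX_KEY / COMMITTED_INDEX_KEY above
        long ledgerEndIndex = Long.parseLong(properties.getProperty("endIndex", "-1"));
        long committedIndex = Long.parseLong(properties.getProperty("committedIndex", "-1"));
        System.out.println("ledgerEndIndex=" + ledgerEndIndex + ", committedIndex=" + committedIndex);
    }
}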

RocketMQ version: 4.9.3

Original article: https://www.cnblogs.com/shanml/p/17153989.html
