We all know that search in Elasticsearch is near-real-time: right after a document is written, a search (as opposed to a GET by id) will not find it. The root cause lies in the API that Lucene provides, which is not real-time at all. Elasticsearch builds on top of Lucene and, with a few enhancements, turns this into near-real-time search plus real-time GET by id. This post looks at how the near-real-time part works.
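To make the Lucene side of this concrete, here is a minimal stand-alone sketch of mine (assuming Lucene 8.x on the classpath; it is an illustration, not code from the ES source): a reader opened from an IndexWriter does not see newly added documents until it is reopened, which is exactly what a refresh does.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;

public class NrtDemo {
    public static void main(String[] args) throws Exception {
        ByteBuffersDirectory dir = new ByteBuffersDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        // an NRT reader opened against the writer
        DirectoryReader reader = DirectoryReader.open(writer);

        Document doc = new Document();
        doc.add(new StringField("id", "1", Field.Store.YES));
        writer.addDocument(doc);

        // the old reader does not see the new, uncommitted document
        TermQuery query = new TermQuery(new Term("id", "1"));
        System.out.println(new IndexSearcher(reader).count(query)); // 0
        System.out.println(reader.isCurrent());                     // false: the index has changed

        // a "refresh": reopen the reader only if the index changed
        DirectoryReader newReader = DirectoryReader.openIfChanged(reader);
        if (newReader != null) {
            reader.close();
            reader = newReader;
        }
        System.out.println(new IndexSearcher(reader).count(query)); // 1

        reader.close();
        writer.close();
        dir.close();
    }
}
```

The isCurrent()/openIfChanged() pair at the end is exactly what ES wires into its refresh, as the rest of this post shows.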
For every index it hosts, ES creates a scheduled refresh task in the corresponding IndexService (on each run the task walks over all shards of that index, as we will see below); AsyncRefreshTask is that task:
```java
// org.elasticsearch.index.IndexService.AsyncRefreshTask
static final class AsyncRefreshTask extends BaseAsyncTask {

    AsyncRefreshTask(IndexService indexService) {
        super(indexService, indexService.getIndexSettings().getRefreshInterval());
    }

    @Override
    protected void runInternal() {
        indexService.maybeRefreshEngine(false);
    }

    @Override
    protected String getThreadPool() {
        return ThreadPool.Names.REFRESH;
    }

    @Override
    public String toString() {
        return "refresh";
    }
}
```
How often does this scheduled task run? Once per second by default, controlled by the index.refresh_interval setting:
```java
// org.elasticsearch.index.IndexSettings
public static final TimeValue DEFAULT_REFRESH_INTERVAL = new TimeValue(1, TimeUnit.SECONDS);
public static final Setting<TimeValue> INDEX_REFRESH_INTERVAL_SETTING = Setting.timeSetting(
    "index.refresh_interval",
    DEFAULT_REFRESH_INTERVAL,
    new TimeValue(-1, TimeUnit.MILLISECONDS),
    Property.Dynamic,
    Property.IndexScope
);
```
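Since the setting carries Property.Dynamic, it can be changed per index at runtime. As a side note, here is a rough sketch of mine using the low-level REST client (my-index and the localhost address are placeholders): "30s" refreshes less often, and -1, the lower bound above, disables the scheduled refresh altogether.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RefreshIntervalDemo {
    public static void main(String[] args) throws Exception {
        // assumption: a local node listening on 9200; "my-index" is a placeholder index name
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/my-index/_settings");
            // index.refresh_interval is dynamic: "30s" refreshes less often, "-1" disables the schedule
            request.setJsonEntity("{\"index\":{\"refresh_interval\":\"30s\"}}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```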
Where does the scheduled task actually get submitted? In AbstractAsyncTask#rescheduleIfNecessary below, the threadPool.schedule(...) call puts the task onto the thread pool:
```java
// org.elasticsearch.index.IndexService.BaseAsyncTask
abstract static class BaseAsyncTask extends AbstractAsyncTask {

    protected final IndexService indexService;

    BaseAsyncTask(final IndexService indexService, final TimeValue interval) {
        super(indexService.logger, indexService.threadPool, interval, true);
        this.indexService = indexService;
        rescheduleIfNecessary();
    }

    @Override
    protected boolean mustReschedule() {
        // don't re-schedule if the IndexService instance is closed or if the index is closed
        return indexService.closed.get() == false
            && indexService.indexSettings.getIndexMetadata().getState() == IndexMetadata.State.OPEN;
    }
}

// org.elasticsearch.common.util.concurrent.AbstractAsyncTask#rescheduleIfNecessary
public synchronized void rescheduleIfNecessary() {
    if (isClosed()) {
        return;
    }
    if (cancellable != null) {
        cancellable.cancel();
    }
    if (interval.millis() > 0 && mustReschedule()) {
        if (logger.isTraceEnabled()) {
            logger.trace("scheduling {} every {}", toString(), interval);
        }
        cancellable = threadPool.schedule(this, interval, getThreadPool());
        isScheduledOrRunning = true;
    } else {
        logger.trace("scheduled {} disabled", toString());
        cancellable = null;
        isScheduledOrRunning = false;
    }
}
```
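The pattern here is a self-rescheduling task: each run is scheduled once, and only after it finishes is the next run scheduled. A rough stand-alone sketch of that idea with a plain ScheduledExecutorService (all names are mine, for illustration only; this is not ES code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// illustrative stand-in for AbstractAsyncTask's reschedule-after-run behaviour
class SelfReschedulingTask implements Runnable {
    private final ScheduledExecutorService pool;
    private final long intervalMillis;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    SelfReschedulingTask(ScheduledExecutorService pool, long intervalMillis) {
        this.pool = pool;
        this.intervalMillis = intervalMillis;
        reschedule(); // like rescheduleIfNecessary() in the constructor above
    }

    private void reschedule() {
        if (closed.get() == false && intervalMillis > 0) {
            pool.schedule(this, intervalMillis, TimeUnit.MILLISECONDS);
        }
    }

    @Override
    public void run() {
        try {
            System.out.println("refresh tick"); // stand-in for maybeRefreshEngine(false)
        } finally {
            reschedule(); // the next run is only scheduled after this one completes
        }
    }

    void close() {
        closed.set(true);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        SelfReschedulingTask task = new SelfReschedulingTask(pool, 1000);
        Thread.sleep(3500); // observe a few ticks
        task.close();
        pool.shutdown();
    }
}
```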
As for the actual execution, whether a refresh really happens depends on a series of conditions in IndexShard#scheduledRefresh: reads must be allowed on the shard, and either a refresh listener is waiting or the engine reports that it needs a refresh; a search-idle shard can also skip the scheduled refresh and leave it to the next search request. Here we focus on getEngine().refreshNeeded():
```java
// org.elasticsearch.index.IndexService#maybeRefreshEngine
private void maybeRefreshEngine(boolean force) {
    if (indexSettings.getRefreshInterval().millis() > 0 || force) {
        for (IndexShard shard : this.shards.values()) {
            try {
                shard.scheduledRefresh();
            } catch (IndexShardClosedException | AlreadyClosedException ex) {
                // fine - continue;
            }
        }
    }
}

// org.elasticsearch.index.shard.IndexShard#scheduledRefresh
/**
 * Executes a scheduled refresh if necessary.
 *
 * @return <code>true</code> iff the engine got refreshed otherwise <code>false</code>
 */
public boolean scheduledRefresh() {
    verifyNotClosed();
    boolean listenerNeedsRefresh = refreshListeners.refreshNeeded();
    if (isReadAllowed() && (listenerNeedsRefresh || getEngine().refreshNeeded())) {
        if (listenerNeedsRefresh == false // if we have a listener that is waiting for a refresh we need to force it
            && isSearchIdle()
            && indexSettings.isExplicitRefresh() == false
            && active.get()) { // it must be active otherwise we might not free up segment memory once the shard became inactive
            // lets skip this refresh since we are search idle and
            // don't necessarily need to refresh. the next searcher access will register a refreshListener and that will
            // cause the next schedule to refresh.
            final Engine engine = getEngine();
            engine.maybePruneDeletes(); // try to prune the deletes in the engine if we accumulated some
            setRefreshPending(engine);
            return false;
        } else {
            if (logger.isTraceEnabled()) {
                logger.trace("refresh with source [schedule]");
            }
            return getEngine().maybeRefresh("schedule");
        }
    }
    final Engine engine = getEngine();
    engine.maybePruneDeletes(); // try to prune the deletes in the engine if we accumulated some
    return false;
}
```
How does the engine decide whether a refresh is needed? It ultimately calls isCurrent() on Lucene's DirectoryReader. As the javadoc explains, this method returns false once the index has changed since the reader was opened (new commits, or, for an NRT reader, new uncommitted changes made through the writer), which is why refreshNeeded() negates its result:
```java
// org.elasticsearch.index.engine.Engine#refreshNeeded
public boolean refreshNeeded() {
    if (store.tryIncRef()) {
        /*
          we need to inc the store here since we acquire a searcher and that might keep a file open on the
          store. this violates the assumption that all files are closed when
          the store is closed so we need to make sure we increment it here
         */
        try {
            try (Searcher searcher = acquireSearcher("refresh_needed", SearcherScope.EXTERNAL)) {
                return searcher.getDirectoryReader().isCurrent() == false;
            }
        } catch (IOException e) {
            logger.error("failed to access searcher manager", e);
            failEngine("failed to access searcher manager", e);
            throw new EngineException(shardId, "failed to access searcher manager", e);
        } finally {
            store.decRef();
        }
    }
    return false;
}

// org.apache.lucene.index.DirectoryReader#isCurrent
/**
 * Check whether any new changes have occurred to the index since this reader was opened.
 *
 * If this reader was created by calling open, then this method checks if any further commits
 * (see IndexWriter.commit) have occurred in the directory.
 *
 * If instead this reader is a near real-time reader (ie, obtained by a call to open(IndexWriter),
 * or by calling openIfChanged on a near real-time reader), then this method checks if either a new
 * commit has occurred, or any new uncommitted changes have taken place via the writer. Note that
 * even if the writer has only performed merging, this method will still return false.
 *
 * In any event, if this returns false, you should call openIfChanged to get a new reader that sees the changes.
 */
public abstract boolean isCurrent() throws IOException;
```
Having written mostly CRUD business code, I assumed at first glance that IndexService manages all indices. After reading the source more carefully: in ES, each index has one IndexService instance on a node, and each shard has its own Engine instance. Now let's see what the refresh itself does. It ends up calling Lucene's DirectoryReader.openIfChanged; the new reader returned by that call is the one that can see the newly written documents.
```java
// org.elasticsearch.index.engine.InternalEngine#maybeRefresh
@Override
public boolean maybeRefresh(String source) throws EngineException {
    return refresh(source, SearcherScope.EXTERNAL, false);
}

// org.elasticsearch.index.engine.InternalEngine#refresh
final boolean refresh(String source, SearcherScope scope, boolean block) throws EngineException {
    // both refresh types will result in an internal refresh but only the external will also
    // pass the new reader reference to the external reader manager.
    final long localCheckpointBeforeRefresh = localCheckpointTracker.getProcessedCheckpoint();
    boolean refreshed;
    try {
        // refresh does not need to hold readLock as ReferenceManager can handle correctly if the engine is closed in mid-way.
        if (store.tryIncRef()) {
            // increment the ref just to ensure nobody closes the store during a refresh
            try {
                // even though we maintain 2 managers we really do the heavy-lifting only once.
                // the second refresh will only do the extra work we have to do for warming caches etc.
                ReferenceManager<ElasticsearchDirectoryReader> referenceManager = getReferenceManager(scope);
                // it is intentional that we never refresh both internal / external together
                if (block) {
                    referenceManager.maybeRefreshBlocking();
                    refreshed = true;
                } else {
                    refreshed = referenceManager.maybeRefresh();
                }
            } finally {
                store.decRef();
            }
            if (refreshed) {
                lastRefreshedCheckpointListener.updateRefreshedCheckpoint(localCheckpointBeforeRefresh);
            }
        } else {
            refreshed = false;
        }
    } catch (AlreadyClosedException e) {
        failOnTragicEvent(e);
        throw e;
    } catch (Exception e) {
        try {
            failEngine("refresh failed source[" + source + "]", e);
        } catch (Exception inner) {
            e.addSuppressed(inner);
        }
        throw new RefreshFailedEngineException(shardId, e);
    }
    assert refreshed == false || lastRefreshedCheckpoint() >= localCheckpointBeforeRefresh
        : "refresh checkpoint was not advanced; "
            + "local_checkpoint=" + localCheckpointBeforeRefresh + " refresh_checkpoint=" + lastRefreshedCheckpoint();
    // TODO: maybe we should just put a scheduled job in threadPool?
    // We check for pruning in each delete request, but we also prune here e.g. in case a delete burst comes in and then no more deletes
    // for a long time:
    maybePruneDeletes();
    mergeScheduler.refreshConfig();
    return refreshed;
}

// org.elasticsearch.index.engine.ElasticsearchReaderManager#refreshIfNeeded
class ElasticsearchReaderManager extends ReferenceManager<ElasticsearchDirectoryReader> {

    @Override
    protected ElasticsearchDirectoryReader refreshIfNeeded(ElasticsearchDirectoryReader referenceToRefresh) throws IOException {
        return (ElasticsearchDirectoryReader) DirectoryReader.openIfChanged(referenceToRefresh);
    }
}
```
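ElasticsearchReaderManager extends Lucene's ReferenceManager, the same abstraction behind SearcherManager, and the referenceManager.maybeRefresh() call above is what actually swaps in a new reader. A minimal sketch of that acquire/release plus maybeRefresh pattern in plain Lucene (again assuming Lucene 8.x; this is my illustration, not the ES code):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.ByteBuffersDirectory;

public class ReferenceManagerDemo {
    public static void main(String[] args) throws Exception {
        ByteBuffersDirectory dir = new ByteBuffersDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
        // SearcherManager is a ReferenceManager<IndexSearcher> over an NRT reader
        SearcherManager manager = new SearcherManager(writer, new SearcherFactory());

        Document doc = new Document();
        doc.add(new StringField("id", "1", Field.Store.YES));
        writer.addDocument(doc);

        // the "refresh": swap in a new reader only if the index changed
        manager.maybeRefresh();

        // searches borrow the current reader reference via acquire/release
        IndexSearcher searcher = manager.acquire();
        try {
            System.out.println(searcher.count(new TermQuery(new Term("id", "1")))); // 1
        } finally {
            manager.release(searcher);
        }

        manager.close();
        writer.close();
        dir.close();
    }
}
```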
The code is long, but the conclusion is simple: through a scheduled task, ES periodically refreshes every index, upgrading Lucene's non-real-time search into near-real-time search.
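As a practical aside (not part of the source walk-through above): if a document has to be visible immediately, you don't have to wait for the scheduled refresh; the refresh API triggers one on demand. A small sketch with the low-level REST client, with my-index again a placeholder:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class ExplicitRefreshDemo {
    public static void main(String[] args) throws Exception {
        // assumption: a local node on 9200; "my-index" is a placeholder index name
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // force a refresh now instead of waiting for the next scheduled one
            client.performRequest(new Request("POST", "/my-index/_refresh"));
        }
    }
}
```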