# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
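The "about half the memory" guidance above can be sketched as a small shell helper. The cap at 31 GB (so the JVM can keep using compressed object pointers) is a general Elasticsearch sizing recommendation rather than something stated in this file, and `heap_mb` is a hypothetical helper name:

```shell
# Sketch (assumption-laden): pick ES_HEAP_SIZE as half of system RAM in MB,
# capped at 31 GB to stay within the compressed-oops range.
heap_mb() {
  local total_mb=$1
  local half=$(( total_mb / 2 ))
  local cap=$(( 31 * 1024 ))
  # never allocate more than the cap, even on very large machines
  if [ "$half" -gt "$cap" ]; then half=$cap; fi
  echo "$half"
}

heap_mb 8192    # 8 GB machine -> 4096
```

You would then export the result, e.g. `export ES_HEAP_SIZE="$(heap_mb 8192)m"`, before starting the node.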
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.16.178.129
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
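The split-brain formula in the Discovery section ((total number of master-eligible nodes / 2) + 1, i.e. a strict majority) is easy to compute with a one-line helper; `quorum` is a hypothetical name used only for illustration:

```shell
# Sketch: compute discovery.zen.minimum_master_nodes for a cluster with N
# master-eligible nodes, using the quorum formula N / 2 + 1 (integer division).
quorum() {
  echo $(( $1 / 2 + 1 ))
}

quorum 5   # prints 3
quorum 3   # prints 2
```

With 5 master-eligible nodes this yields 3, matching the `discovery.zen.minimum_master_nodes: 3` example in the template above.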
# you can override this by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console, file
logger:
  # log action execution errors for easier debugging
  action: DEBUG
  # deprecation logging; set to DEBUG to see deprecation warnings
  deprecation: INFO, deprecation_log_file
  # reduce the logging for aws; too much is logged under the default INFO
  com.amazonaws: WARN
  # aws will try to do some sketchy JMX stuff, but it's not needed.
  com.amazonaws.jmx.SdkMBeanRegistrySupport: ERROR
  com.amazonaws.metrics.AwsSdkMetrics: ERROR
# Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.
# For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html
#file:
  #type: extrasRollingFile
  #file: ${path.logs}/${cluster.name}.log
  #rollingPolicy: timeBased
  #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
  #layout:
    #type: pattern
    #conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
$ sudo docker run -dti --name=elasticsearch --network=host -v /home/tony/Desktop/elasticsearch-2.4.6/config:/usr/share/elasticsearch/config delron/elasticsearch-ik:2.4.6-1.0
Check the running status: both Elasticsearch and the fastdfs containers covered in the previous article are running normally.
tony@ubuntu:~/Desktop/elasticsearch-2.4.6/config$ sudo docker ps
CONTAINER ID   IMAGE                               COMMAND                  CREATED      STATUS             PORTS   NAMES
519432a1232b   delron/elasticsearch-ik:2.4.6-1.0   "/docker-entrypoint.…"   2 days ago   Up About an hour           elasticsearch
d7d3a976fa77   delron/fastdfs                      "/usr/bin/start1.sh …"   5 days ago   Up About an hour           storage
470dbf71de20   delron/fastdfs                      "/usr/bin/start1.sh …"   5 days ago   Up About an hour           tracker
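The same check can be scripted. This is a minimal sketch: the `ps_out` variable below holds a sample line from the `docker ps` output above; on a real host you would capture it with `ps_out="$(sudo docker ps)"` instead:

```shell
# Sketch: verify the elasticsearch container reports an "Up" status in the
# `docker ps` listing (sample output hard-coded here for illustration).
ps_out='519432a1232b   delron/elasticsearch-ik:2.4.6-1.0   "/docker-entrypoint.…"   2 days ago   Up About an hour   elasticsearch'

if echo "$ps_out" | grep elasticsearch | grep -q ' Up '; then
  echo "elasticsearch is running"
fi
```

A dropped container (status `Exited (1) ...`) would fail the ` Up ` match, which makes this usable as a simple health check in a cron job or startup script.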