Elasticsearch And Mongodb Distributed Cluster Environment Data Synchronization

by LatashiaLuker55 posted Oct 09, 2017
#curl way to build a river (and build "resume" as an index)

curl -XPUT 'localhost:9200/_river/tbJobResume/_meta' -d '
{
  "type": "mongodb",
  "mongodb": {
    "host": "192.168.225.131",
    "port": "37017",
    "db": "MongoModelJobResume",
    "collection": "tbJobResume"
  },
  "index": {
    "name": "resume",
    "type": "tbJobResume"
  }
}'

Note: in _river/tbJobResume, tbJobResume is the name I use (the source collection name); it is best to give each river a different name. Do not forget the pair of single quotation marks wrapping the JSON content after -d.

type: the river type; mongodb means the source is a MongoDB database.

mongodb: host (ip), port, db (name), and collection need no explanation.

index.name: the name of the index to create, preferably lowercase (this may in fact be required).

index.type: the document type of the index, corresponding to the name of the source collection.

Verification:

curl 'http://localhost:9200/_river/tbJobResume/_meta'

This creates the resume river; if MongoDB already contains data, it will be synchronized over.
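Since the _meta document is plain JSON, it can be convenient to build it programmatically and inspect it before PUTting it. A minimal sketch in Python (the helper name and the use of the standard json module are my own; the field values are the ones from the curl example above):

```python
import json

def build_river_meta(host, port, db, collection, index_name, index_type):
    """Build the _meta JSON body for a MongoDB river, mirroring the curl example."""
    return json.dumps({
        "type": "mongodb",
        "mongodb": {
            "host": host,
            "port": port,
            "db": db,
            "collection": collection,
        },
        "index": {
            "name": index_name,  # index names should be lowercase
            "type": index_type,
        },
    })

meta = build_river_meta("192.168.225.131", "37017",
                        "MongoModelJobResume", "tbJobResume",
                        "resume", "tbJobResume")
print(meta)
```

The resulting string is exactly what goes after -d in the curl command.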

Special note: if the tbJobResume collection has a field holding geographical coordinates, it needs to be mapped to the geo_point type. Create the mapping before creating the river, as follows:

curl -XPUT 'http://localhost:9200/resume' -d '
{
  "mappings": {
    "tbJobResume": {
      "properties": {
        "Location": {
          "type": "geo_point"
        }
      }
    }
  }
}'

Once this mapping is in place, the index is created.
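With Location mapped as geo_point, the synchronized data can be searched with geo queries. A hedged sketch of a geo_distance search body (the coordinates and radius are illustrative placeholders, not from the original post):

```python
import json

# A geo_distance search body against the Location field mapped above.
# The "filtered" query form matches the pre-2.0 Elasticsearch releases
# that the river plugin targets; coordinates and radius are made up.
query = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {
                "geo_distance": {
                    "distance": "10km",
                    "Location": {"lat": 39.91, "lon": 116.40},
                }
            },
        }
    }
}
print(json.dumps(query))
```

This body would be POSTed to the resume index's _search endpoint.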

--- The following is the construction of another index ---

curl -XPUT 'localhost:9200/_river/tbJobPosition/_meta' -d '
{
  "type": "mongodb",
  "mongodb": {
    "host": "192.168.225.131",
    "port": "37017",
    "db": "MongoModelJob",
    "collection": "tbJobPosition"
  },
  "index": {
    "name": "position",
    "type": "tbJobPosition"
  }
}'

curl 'http://localhost:9200/_river/tbJobPosition/_meta'

---------------

#curl: put index data

curl -XPUT 'http://localhost:9200/customer/tbCustomer/1' -d '
{
  "_id": 1,
  "Name": "Francis Ford Coppola 1",
  "Sex": 1
}'

This creates a customer index and puts one document into it; tbCustomer is the type.

curl -XPUT 'http://192.168.225.131:9200/dept/employee/32' -d '{"empname": "emp32"}'

curl -XPUT 'http://192.168.225.131:9200/dept/employee/31' -d '{"empname": "emp31"}'

These likewise create a dept index and put documents into it; employee is the type.
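All of the PUTs above follow the same URL pattern, http://host:9200/{index}/{type}/{id}. A small sketch that assembles such a request (the helper is illustrative, not part of the plugin or of Elasticsearch):

```python
import json

def build_put(host, index, doc_type, doc_id, doc):
    """Return the (url, body) pair for indexing one document via HTTP PUT,
    matching the curl calls above."""
    url = "http://%s:9200/%s/%s/%s" % (host, index, doc_type, doc_id)
    return url, json.dumps(doc)

url, body = build_put("192.168.225.131", "dept", "employee", 32,
                      {"empname": "emp32"})
print(url)
print(body)
```

Any HTTP client can then PUT the body to the URL, just as curl -XPUT does.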

To create a river and its index, the variable template is as follows:

$ curl -XPUT 'localhost:9200/_river/$es.river.name/_meta' -d '
{
  "type": "mongodb",
  "mongodb": {
    "servers": [
      { "host": $mongo.instance1.host, "port": $mongo.instance1.port },
      { "host": $mongo.instance2.host, "port": $mongo.instance2.port }
    ],
    "options": {
      "secondary_read_preference": true,
      "drop_collection": $mongo.drop.collection,
      "exclude_fields": $mongo.exclude.fields,
      "include_fields": $mongo.include.fields,
      "include_collection": $mongo.include.collection,
      "import_all_collections": $mongo.import.all.collections,
      "initial_timestamp": {
        "script_type": $mongo.initial.timestamp.script.type,
        "script": $mongo.initial.timestamp.script
      },
      "skip_initial_import": $mongo.skip.initial.import,
      "store_statistics": $mongo.store.statistics
    },
    "credentials": [
      { "db": "admin", "user": $mongo.db.user, "password": $mongo.db.password }
    ],
    "db": $mongo.db.name,
    "collection": $mongo.collection.name,
    "gridfs": $mongo.is.gridfs.collection,
    "filter": $mongo.filter
  },
  "index": {
    "name": $es.index.name,
    "throttle_size": $es.throttle.size,
    "bulk_size": $es.bulk.size,
    "type": $es.type.name,
    "bulk": {
      "actions": $es.bulk.actions,
      "size": $es.bulk.size,
      "concurrent_requests": $es.bulk.concurrent.requests,
      "flush_interval": $es.bulk.flush.interval
    }
  }
}'

--template end--
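The $-prefixed names in the template are placeholders to be substituted with concrete values before the body is sent. A sketch of filling a few of them with sample values (the sample values are mine; note that Python's string.Template identifiers cannot contain dots, so the template's $mongo.db.name style is rewritten as $db_name here):

```python
import json
from string import Template

# A small fragment of the river body with $-placeholders;
# Template.substitute fills them in with concrete values.
fragment = Template('''{
  "type": "mongodb",
  "mongodb": {
    "db": "$db_name",
    "collection": "$collection_name"
  },
  "index": {
    "name": "$index_name",
    "type": "$type_name"
  }
}''')

body = fragment.substitute(db_name="MongoModelJob",
                           collection_name="tbJobPosition",
                           index_name="position",
                           type_name="tbJobPosition")
print(body)
```

The substituted string is valid JSON and can be PUT to _river/{name}/_meta as in the examples above.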

--url--

The plugin's git repository: https://github.com/laigood/elasticsearch-river-mongodb
