When we post data where a field value exceeds the limit, Elasticsearch will reject the document with an error like this:
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[cs19-2][10.200.20.39:9301][indices:data/write/index]"}],"type":"illegal_argument_exception","reason":"Document contains at least one immense term in field=\"field1\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[cs19-2][10.200.20.39:9301][indices:data/write/index]"}],"type":"illegal_argument_exception","reason":"Document contains at least one immense term in field=\"field1\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is
You can handle this in two steps:
1. You can update the mapping at any time (the example below uses a field called logInfo; substitute your own field name, e.g. field1 from the error above):
PUT INDEXNAME/_mapping/TYPENAME
{
  "properties": {
    "logInfo": {
      "type": "string",
      "analyzer": "keyword",
      "ignore_above": 32766
    }
  }
}
ignore_above means that any value longer than the limit will not be indexed at all; the document itself is still accepted, so the write no longer fails. Note that ignore_above counts characters while Lucene's 32766 limit is in bytes, so if your data contains multi-byte UTF-8 characters a safer value is 32766 / 4 = 8191.
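To check that the change took effect, you can read the mapping back and re-send the oversized document, re-using the $long value and placeholder names from the sketch above:

# confirm the new mapping is in place
curl -XGET 'http://localhost:9200/INDEXNAME/_mapping/TYPENAME?pretty'
# re-send the oversized document; it should now be accepted,
# with the too-long value simply left out of the index
curl -XPOST 'http://localhost:9200/INDEXNAME/TYPENAME' -d "{\"logInfo\": \"$long\"}"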
2. You also have to update your index template so that newly created indices pick up this mapping; see the sketch below.
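A sketch of the matching template change, assuming an ES 2.x-style template (the template name and index pattern are placeholders you should replace with your own):

curl -XPUT 'http://localhost:9200/_template/logs_template' -d '
{
  "template": "INDEXNAME-*",
  "mappings": {
    "TYPENAME": {
      "properties": {
        "logInfo": {
          "type": "string",
          "analyzer": "keyword",
          "ignore_above": 32766
        }
      }
    }
  }
}'

The template only affects indices created after it is stored, which is why step 1 (the live mapping update) is still needed for the current index.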
Thanks.