@datafire/azure_search_searchservice v4.0.0
@datafire/azure_search_searchservice
Client library for SearchServiceClient
Installation and Usage
npm install --save @datafire/azure_search_searchservice
let azure_search_searchservice = require('@datafire/azure_search_searchservice').create();

// For example, list the datasources on the search service (any action documented below can be called this way):
azure_search_searchservice.DataSources_List({
  "api-version": ""
}).then(data => {
  console.log(data);
});
Description
Client that can be used to manage and query indexes and documents, as well as manage other resources, on a search service.
Actions
DataSources_List
Lists all datasources available for a search service.
azure_search_searchservice.DataSources_List({
"api-version": ""
}, context)
Input
- input
object
- $select
string
: Selects which top-level properties of the data sources to retrieve. Specified as a comma-separated list of JSON property names, or '*' for all properties. The default is all properties.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output DataSourceListResult
DataSources_Create
Creates a new datasource.
azure_search_searchservice.DataSources_Create({
"dataSource": null,
"api-version": ""
}, context)
Input
- input
object
- dataSource required DataSource
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output DataSource
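As a hedged sketch (all names and the connection string are placeholders), a blob datasource could be created like this; the required properties follow the DataSource definition below:
// Hypothetical example: create an Azure Blob datasource.
azure_search_searchservice.DataSources_Create({
  "api-version": "",
  "dataSource": {
    "name": "my-blob-datasource",
    "type": "azureblob", // one of: azuresql, cosmosdb, azureblob, azuretable, mysql
    "credentials": { "connectionString": "<storage-connection-string>" },
    "container": { "name": "my-container" }
  }
}).then(dataSource => {
  console.log(dataSource.name);
});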
DataSources_Delete
Deletes a datasource.
azure_search_searchservice.DataSources_Delete({
"dataSourceName": "",
"api-version": ""
}, context)
Input
- input
object
- dataSourceName required
string
: The name of the datasource to delete.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
DataSources_Get
Retrieves a datasource definition.
azure_search_searchservice.DataSources_Get({
"dataSourceName": "",
"api-version": ""
}, context)
Input
- input
object
- dataSourceName required
string
: The name of the datasource to retrieve.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output DataSource
DataSources_CreateOrUpdate
Creates a new datasource or updates a datasource if it already exists.
azure_search_searchservice.DataSources_CreateOrUpdate({
"dataSourceName": "",
"dataSource": null,
"Prefer": "",
"api-version": ""
}, context)
Input
- input
object
- dataSourceName required
string
: The name of the datasource to create or update.
- dataSource required DataSource
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- Prefer required
string
(values: return=representation): For HTTP PUT requests, instructs the service to return the created/updated resource on success.
- api-version required
string
: Client Api Version.
Output
- output DataSource
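A sketch of an update guarded by the datasource's ETag (optimistic concurrency); the datasource name is a placeholder and the body is written back unchanged for brevity:
// Hypothetical example: re-read a datasource, then update it only if it has not changed on the server.
azure_search_searchservice.DataSources_Get({
  "dataSourceName": "my-blob-datasource",
  "api-version": ""
}).then(existing => azure_search_searchservice.DataSources_CreateOrUpdate({
  "dataSourceName": existing.name,
  "dataSource": existing, // modify fields here before writing back
  "Prefer": "return=representation",
  "If-Match": existing["@odata.etag"], // the update only happens if the server-side ETag still matches
  "api-version": ""
})).then(updated => {
  console.log(updated);
});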
Indexers_List
Lists all indexers available for a search service.
azure_search_searchservice.Indexers_List({
"api-version": ""
}, context)
Input
- input
object
- $select
string
: Selects which top-level properties of the indexers to retrieve. Specified as a comma-separated list of JSON property names, or '*' for all properties. The default is all properties.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output IndexerListResult
Indexers_Create
Creates a new indexer.
azure_search_searchservice.Indexers_Create({
"indexer": null,
"api-version": ""
}, context)
Input
- input
object
- indexer required Indexer
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output Indexer
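A minimal sketch of creating an indexer (placeholder names; the schedule interval is assumed to be an ISO 8601 duration, per the IndexingSchedule definition below):
// Hypothetical example: index an existing datasource into an existing index every two hours.
azure_search_searchservice.Indexers_Create({
  "api-version": "",
  "indexer": {
    "name": "my-indexer",
    "dataSourceName": "my-blob-datasource", // must refer to an existing datasource
    "targetIndexName": "my-index",          // must refer to an existing index
    "schedule": { "interval": "PT2H" }
  }
}).then(indexer => {
  console.log(indexer.name);
});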
Indexers_Delete
Deletes an indexer.
azure_search_searchservice.Indexers_Delete({
"indexerName": "",
"api-version": ""
}, context)
Input
- input
object
- indexerName required
string
: The name of the indexer to delete.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
Indexers_Get
Retrieves an indexer definition.
azure_search_searchservice.Indexers_Get({
"indexerName": "",
"api-version": ""
}, context)
Input
- input
object
- indexerName required
string
: The name of the indexer to retrieve.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output Indexer
Indexers_CreateOrUpdate
Creates a new indexer or updates an indexer if it already exists.
azure_search_searchservice.Indexers_CreateOrUpdate({
"indexerName": "",
"indexer": null,
"Prefer": "",
"api-version": ""
}, context)
Input
- input
object
- indexerName required
string
: The name of the indexer to create or update.
- indexer required Indexer
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- Prefer required
string
(values: return=representation): For HTTP PUT requests, instructs the service to return the created/updated resource on success.
- api-version required
string
: Client Api Version.
Output
- output Indexer
Indexers_Reset
Resets the change tracking state associated with an indexer.
azure_search_searchservice.Indexers_Reset({
"indexerName": "",
"api-version": ""
}, context)
Input
- input
object
- indexerName required
string
: The name of the indexer to reset.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
Indexers_Run
Runs an indexer on-demand.
azure_search_searchservice.Indexers_Run({
"indexerName": "",
"api-version": ""
}, context)
Input
- input
object
- indexerName required
string
: The name of the indexer to run.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
Indexers_GetStatus
Returns the current status and execution history of an indexer.
azure_search_searchservice.Indexers_GetStatus({
"indexerName": "",
"api-version": ""
}, context)
Input
- input
object
- indexerName required
string
: The name of the indexer for which to retrieve status.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output IndexerExecutionInfo
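A sketch combining the two calls above: trigger an on-demand run, then read back the indexer's status (the indexer name is a placeholder):
// Hypothetical example: run an indexer, then inspect its most recent execution result.
azure_search_searchservice.Indexers_Run({
  "indexerName": "my-indexer",
  "api-version": ""
}).then(() => azure_search_searchservice.Indexers_GetStatus({
  "indexerName": "my-indexer",
  "api-version": ""
})).then(info => {
  console.log(info.status); // overall indexer status
  if (info.lastResult) {
    console.log(info.lastResult.itemsProcessed, info.lastResult.itemsFailed);
  }
});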
Indexes_List
Lists all indexes available for a search service.
azure_search_searchservice.Indexes_List({
"api-version": ""
}, context)
Input
- input
object
- $select
string
: Selects which top-level properties of the index definitions to retrieve. Specified as a comma-separated list of JSON property names, or '*' for all properties. The default is all properties.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output IndexListResult
Indexes_Create
Creates a new search index.
azure_search_searchservice.Indexes_Create({
"index": null,
"api-version": ""
}, context)
Input
- input
object
- index required Index
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output Index
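A hedged sketch of a small index definition (field and index names are placeholders); the Field definition below lists the available per-field options:
// Hypothetical example: an index with a string key and a full-text searchable field.
azure_search_searchservice.Indexes_Create({
  "api-version": "",
  "index": {
    "name": "my-index",
    "fields": [
      { "name": "id", "type": "Edm.String", "key": true },
      { "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" }
    ]
  }
}).then(index => {
  console.log(index.name);
});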
Indexes_Delete
Deletes a search index and all the documents it contains.
azure_search_searchservice.Indexes_Delete({
"indexName": "",
"api-version": ""
}, context)
Input
- input
object
- indexName required
string
: The name of the index to delete.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
Indexes_Get
Retrieves an index definition.
azure_search_searchservice.Indexes_Get({
"indexName": "",
"api-version": ""
}, context)
Input
- input
object
- indexName required
string
: The name of the index to retrieve.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output Index
Indexes_CreateOrUpdate
Creates a new search index or updates an index if it already exists.
azure_search_searchservice.Indexes_CreateOrUpdate({
"indexName": "",
"index": null,
"Prefer": "",
"api-version": ""
}, context)
Input
- input
object
- indexName required
string
: The name of the index to create or update.
- index required Index
- allowIndexDowntime
boolean
: Allows new analyzers, tokenizers, token filters, or char filters to be added to an index by taking the index offline for at least a few seconds. This temporarily causes indexing and query requests to fail. Performance and write availability of the index can be impaired for several minutes after the index is updated, or longer for very large indexes.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- Prefer required
string
(values: return=representation): For HTTP PUT requests, instructs the service to return the created/updated resource on success.
- api-version required
string
: Client Api Version.
Output
- output Index
Indexes_Analyze
Shows how an analyzer breaks text into tokens.
azure_search_searchservice.Indexes_Analyze({
"indexName": "",
"request": null,
"api-version": ""
}, context)
Input
- input
object
- indexName required
string
: The name of the index for which to test an analyzer.
- request required AnalyzeRequest
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output AnalyzeResult
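A sketch of testing an analyzer against sample text; standard.lucene is one of the AnalyzerName values listed below, and the index name is a placeholder:
// Hypothetical example: see how the standard Lucene analyzer tokenizes a phrase.
azure_search_searchservice.Indexes_Analyze({
  "indexName": "my-index",
  "api-version": "",
  "request": {
    "text": "The quick brown fox",
    "analyzer": "standard.lucene" // alternatively specify tokenizer, tokenFilters and charFilters
  }
}).then(result => {
  console.log(result.tokens);
});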
Indexes_GetStatistics
Returns statistics for the given index, including a document count and storage usage.
azure_search_searchservice.Indexes_GetStatistics({
"indexName": "",
"api-version": ""
}, context)
Input
- input
object
- indexName required
string
: The name of the index for which to retrieve statistics.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output IndexGetStatisticsResult
GetServiceStatistics
Gets service level statistics for a search service.
azure_search_searchservice.GetServiceStatistics({
"api-version": ""
}, context)
Input
- input
object
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output ServiceStatistics
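A short sketch that reads both index-level and service-level statistics (the index name is a placeholder):
// Hypothetical example: report document count and storage use for one index, plus service-wide statistics.
azure_search_searchservice.Indexes_GetStatistics({
  "indexName": "my-index",
  "api-version": ""
}).then(stats => {
  console.log(stats.documentCount, stats.storageSize);
});
azure_search_searchservice.GetServiceStatistics({
  "api-version": ""
}).then(stats => {
  console.log(stats);
});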
Skillsets_List
List all skillsets in a search service.
azure_search_searchservice.Skillsets_List({
"api-version": ""
}, context)
Input
- input
object
- $select
string
: Selects which top-level properties of the skillsets to retrieve. Specified as a comma-separated list of JSON property names, or '*' for all properties. The default is all properties.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output SkillsetListResult
Skillsets_Create
Creates a new skillset in a search service.
azure_search_searchservice.Skillsets_Create({
"skillset": null,
"api-version": ""
}, context)
Input
- input
object
- skillset required Skillset
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output Skillset
Skillsets_Delete
Deletes a skillset in a search service.
azure_search_searchservice.Skillsets_Delete({
"skillsetName": "",
"api-version": ""
}, context)
Input
- input
object
- skillsetName required
string
: The name of the skillset to delete.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
Skillsets_Get
Retrieves a skillset in a search service.
azure_search_searchservice.Skillsets_Get({
"skillsetName": "",
"api-version": ""
}, context)
Input
- input
object
- skillsetName required
string
: The name of the skillset to retrieve.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output Skillset
Skillsets_CreateOrUpdate
Creates a new skillset in a search service or updates the skillset if it already exists.
azure_search_searchservice.Skillsets_CreateOrUpdate({
"skillsetName": "",
"skillset": null,
"Prefer": "",
"api-version": ""
}, context)
Input
- input
object
- skillsetName required
string
: The name of the skillset to create or update.
- skillset required Skillset
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- Prefer required
string
(values: return=representation): For HTTP PUT requests, instructs the service to return the created/updated resource on success.
- api-version required
string
: Client Api Version.
Output
- output Skillset
SynonymMaps_List
Lists all synonym maps available for a search service.
azure_search_searchservice.SynonymMaps_List({
"api-version": ""
}, context)
Input
- input
object
- $select
string
: Selects which top-level properties of the synonym maps to retrieve. Specified as a comma-separated list of JSON property names, or '*' for all properties. The default is all properties.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output SynonymMapListResult
SynonymMaps_Create
Creates a new synonym map.
azure_search_searchservice.SynonymMaps_Create({
"synonymMap": null,
"api-version": ""
}, context)
Input
- input
object
- synonymMap required SynonymMap
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output SynonymMap
SynonymMaps_Delete
Deletes a synonym map.
azure_search_searchservice.SynonymMaps_Delete({
"synonymMapName": "",
"api-version": ""
}, context)
Input
- input
object
- synonymMapName required
string
: The name of the synonym map to delete.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- api-version required
string
: Client Api Version.
Output
Output schema unknown
SynonymMaps_Get
Retrieves a synonym map definition.
azure_search_searchservice.SynonymMaps_Get({
"synonymMapName": "",
"api-version": ""
}, context)
Input
- input
object
- synonymMapName required
string
: The name of the synonym map to retrieve.
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- api-version required
string
: Client Api Version.
Output
- output SynonymMap
SynonymMaps_CreateOrUpdate
Creates a new synonym map or updates a synonym map if it already exists.
azure_search_searchservice.SynonymMaps_CreateOrUpdate({
"synonymMapName": "",
"synonymMap": null,
"Prefer": "",
"api-version": ""
}, context)
Input
- input
object
- synonymMapName required
string
: The name of the synonym map to create or update.
- synonymMap required SynonymMap
- client-request-id
string
: The tracking ID sent with the request to help with debugging.
- If-Match
string
: Defines the If-Match condition. The operation will be performed only if the ETag on the server matches this value.
- If-None-Match
string
: Defines the If-None-Match condition. The operation will be performed only if the ETag on the server does not match this value.
- Prefer required
string
(values: return=representation): For HTTP PUT requests, instructs the service to return the created/updated resource on success.
- api-version required
string
: Client Api Version.
Output
- output SynonymMap
Definitions
AnalyzeRequest
- AnalyzeRequest
object
: Specifies some text and analysis components used to break that text into tokens.
- analyzer AnalyzerName
- charFilters
array
: An optional list of character filters to use when breaking the given text. This parameter can only be set when using the tokenizer parameter.
- items CharFilterName
- text required
string
: The text to break into tokens.
- tokenFilters
array
: An optional list of token filters to use when breaking the given text. This parameter can only be set when using the tokenizer parameter.
- items TokenFilterName
- tokenizer TokenizerName
AnalyzeResult
- AnalyzeResult
object
: The result of testing an analyzer on text.
- tokens
array
: The list of tokens returned by the analyzer specified in the request.
- items TokenInfo
Analyzer
- Analyzer
object
: Abstract base class for analyzers.
- @odata.type required
string
- name required
string
: The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
AnalyzerName
- AnalyzerName
string
(values: ar.microsoft, ar.lucene, hy.lucene, bn.microsoft, eu.lucene, bg.microsoft, bg.lucene, ca.microsoft, ca.lucene, zh-Hans.microsoft, zh-Hans.lucene, zh-Hant.microsoft, zh-Hant.lucene, hr.microsoft, cs.microsoft, cs.lucene, da.microsoft, da.lucene, nl.microsoft, nl.lucene, en.microsoft, en.lucene, et.microsoft, fi.microsoft, fi.lucene, fr.microsoft, fr.lucene, gl.lucene, de.microsoft, de.lucene, el.microsoft, el.lucene, gu.microsoft, he.microsoft, hi.microsoft, hi.lucene, hu.microsoft, hu.lucene, is.microsoft, id.microsoft, id.lucene, ga.lucene, it.microsoft, it.lucene, ja.microsoft, ja.lucene, kn.microsoft, ko.microsoft, ko.lucene, lv.microsoft, lv.lucene, lt.microsoft, ml.microsoft, ms.microsoft, mr.microsoft, nb.microsoft, no.lucene, fa.lucene, pl.microsoft, pl.lucene, pt-BR.microsoft, pt-BR.lucene, pt-PT.microsoft, pt-PT.lucene, pa.microsoft, ro.microsoft, ro.lucene, ru.microsoft, ru.lucene, sr-cyrillic.microsoft, sr-latin.microsoft, sk.microsoft, sl.microsoft, es.microsoft, es.lucene, sv.microsoft, sv.lucene, ta.microsoft, te.microsoft, th.microsoft, th.lucene, tr.microsoft, tr.lucene, uk.microsoft, ur.microsoft, vi.microsoft, standard.lucene, standardasciifolding.lucene, keyword, pattern, simple, stop, whitespace): Defines the names of all text analyzers supported by Azure Cognitive Search.
AsciiFoldingTokenFilter
- AsciiFoldingTokenFilter
object
: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.- preserveOriginal
boolean
: A value indicating whether the original token will be kept. Default is false. - @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- preserveOriginal
AzureActiveDirectoryApplicationCredentials
- AzureActiveDirectoryApplicationCredentials
object
: Credentials of a registered application created for your search service, used for authenticated access to the encryption keys stored in Azure Key Vault.- applicationId required
string
: An AAD Application ID that was granted the required access permissions to the Azure Key Vault that is to be used when encrypting your data at rest. The Application ID should not be confused with the Object ID for your AAD Application. - applicationSecret
string
: The authentication key of the specified AAD application.
- applicationId required
CharFilter
- CharFilter
object
: Abstract base class for character filters.- @odata.type required
string
- name required
string
: The name of the char filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- @odata.type required
CharFilterName
- CharFilterName
string
(values: html_strip): Defines the names of all character filters supported by Azure Cognitive Search.
CjkBigramTokenFilter
- CjkBigramTokenFilter
object
: Forms bigrams of CJK terms that are generated from StandardTokenizer. This token filter is implemented using Apache Lucene.- ignoreScripts
array
: The scripts to ignore. - outputUnigrams
boolean
: A value indicating whether to output both unigrams and bigrams (if true), or just bigrams (if false). Default is false. - @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- ignoreScripts
CjkBigramTokenFilterScripts
- CjkBigramTokenFilterScripts
string
(values: han, hiragana, katakana, hangul): Scripts that can be ignored by CjkBigramTokenFilter.
ClassicTokenizer
- ClassicTokenizer
object
: Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.- maxTokenLength
integer
: The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. - @odata.type required
string
- name required
string
: The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- maxTokenLength
CognitiveServices
- CognitiveServices
object
: Abstract base class for describing any cognitive service resource attached to the skillset.- @odata.type required
string
- description
string
- @odata.type required
CognitiveServicesByKey
- CognitiveServicesByKey
object
: A cognitive service resource provisioned with a key that is attached to a skillset.- key required
string
- @odata.type required
string
- description
string
- key required
CommonGramTokenFilter
- CommonGramTokenFilter
object
: Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.- commonWords required
array
: The set of common words.- items
string
- items
- ignoreCase
boolean
: A value indicating whether common words matching will be case insensitive. Default is false. - queryMode
boolean
: A value that indicates whether the token filter is in query mode. When in query mode, the token filter generates bigrams and then removes common words and single terms followed by a common word. Default is false. - @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- commonWords required
ConditionalSkill
- ConditionalSkill: A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.
- @odata.type required
string
- context
string
: Represents the level at which operations take place, such as the document root or document content (for example, /document or /document/content). The default is /document. - description
string
: The description of the skill which describes the inputs, outputs, and usage of the skill. - inputs required
array
: Inputs of the skills could be a column in the source data set, or the output of an upstream skill.- items InputFieldMappingEntry
- name
string
: The name of the skill which uniquely identifies it within the skillset. A skill with no name defined will be given a default name of its 1-based index in the skills array, prefixed with the character '#'. - outputs required
array
: The output of a skill is either a field in a search index, or a value that can be consumed as an input by another skill.- items OutputFieldMappingEntry
- @odata.type required
CorsOptions
- CorsOptions
object
: Defines options to control Cross-Origin Resource Sharing (CORS) for an index.- allowedOrigins required
array
: The list of origins from which JavaScript code will be granted access to your index. Can contain a list of hosts of the form {protocol}://{fully-qualified-domain-name}:{port#}, or a single '*' to allow all origins (not recommended).- items
string
- items
- maxAgeInSeconds
integer
: The duration for which browsers should cache CORS preflight responses. Defaults to 5 minutes.
- allowedOrigins required
CustomAnalyzer
- CustomAnalyzer
object
: Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.
- charFilters
array
: A list of character filters used to prepare input text before it is processed by the tokenizer. For instance, they can replace certain characters or symbols. The filters are run in the order in which they are listed.
- items CharFilterName
- tokenFilters
array
: A list of token filters used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. The filters are run in the order in which they are listed.
- items TokenFilterName
- tokenizer required TokenizerName
- @odata.type required
string
- name required
string
: The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
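As a hedged illustration, a custom analyzer is declared in an index's analyzers collection and referenced by name from a field. The analyzer and field names are placeholders, and the @odata.type string plus the built-in "whitespace" tokenizer and "lowercase" token filter are assumptions based on Azure Cognitive Search conventions rather than values listed in this reference:
// Hypothetical index fragment: a custom analyzer that splits on whitespace and lowercases tokens.
{
  "name": "my-index",
  "fields": [
    { "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "my_custom_analyzer" }
  ],
  "analyzers": [
    {
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "name": "my_custom_analyzer",
      "tokenizer": "whitespace",
      "tokenFilters": [ "lowercase" ]
    }
  ]
}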
DataChangeDetectionPolicy
- DataChangeDetectionPolicy
object
: Abstract base class for data change detection policies.- @odata.type required
string
- @odata.type required
DataContainer
- DataContainer
object
: Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed.- name required
string
: The name of the table or view (for Azure SQL data source) or collection (for CosmosDB data source) that will be indexed. - query
string
: A query that is applied to this data container. The syntax and meaning of this parameter is datasource-specific. Not supported by Azure SQL datasources.
- name required
DataDeletionDetectionPolicy
- DataDeletionDetectionPolicy
object
: Abstract base class for data deletion detection policies.- @odata.type required
string
- @odata.type required
DataSource
- DataSource
object
: Represents a datasource definition, which can be used to configure an indexer.- @odata.etag
string
: The ETag of the DataSource. - container required DataContainer
- credentials required DataSourceCredentials
- dataChangeDetectionPolicy DataChangeDetectionPolicy
- dataDeletionDetectionPolicy DataDeletionDetectionPolicy
- description
string
: The description of the datasource. - name required
string
: The name of the datasource. - type required DataSourceType
- @odata.etag
DataSourceCredentials
- DataSourceCredentials
object
: Represents credentials that can be used to connect to a datasource.- connectionString
string
: The connection string for the datasource.
- connectionString
DataSourceListResult
- DataSourceListResult
object
: Response from a List Datasources request. If successful, it includes the full definitions of all datasources.- value
array
: The datasources in the Search service.- items DataSource
- value
DataSourceType
- DataSourceType
string
(values: azuresql, cosmosdb, azureblob, azuretable, mysql): Defines the type of a datasource.
DataType
- DataType
string
(values: Edm.String, Edm.Int32, Edm.Int64, Edm.Double, Edm.Boolean, Edm.DateTimeOffset, Edm.GeographyPoint, Edm.ComplexType): Defines the data type of a field in a search index.
DefaultCognitiveServices
- DefaultCognitiveServices: An empty object that represents the default cognitive service resource for a skillset.
- @odata.type required
string
- description
string
- @odata.type required
DictionaryDecompounderTokenFilter
- DictionaryDecompounderTokenFilter
object
: Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.- maxSubwordSize
integer
: The maximum subword size. Only subwords shorter than this are outputted. Default is 15. Maximum is 300. - minSubwordSize
integer
: The minimum subword size. Only subwords longer than this are outputted. Default is 2. Maximum is 300. - minWordSize
integer
: The minimum word size. Only words longer than this get processed. Default is 5. Maximum is 300. - onlyLongestMatch
boolean
: A value indicating whether to add only the longest matching subword to the output. Default is false. - wordList required
array
: The list of words to match against.- items
string
- items
- @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- maxSubwordSize
DistanceScoringFunction
- DistanceScoringFunction
object
: Defines a function that boosts scores based on distance from a geographic location.- distance required DistanceScoringParameters
- boost required
number
: A multiplier for the raw score. Must be a positive number not equal to 1.0. - fieldName required
string
: The name of the field used as input to the scoring function. - interpolation ScoringFunctionInterpolation
- type required
string
DistanceScoringParameters
- DistanceScoringParameters
object
: Provides parameter values to a distance scoring function.- boostingDistance required
number
: The distance in kilometers from the reference location where the boosting range ends. - referencePointParameter required
string
: The name of the parameter passed in search queries to specify the reference location.
- boostingDistance required
EdgeNGramTokenFilter
- EdgeNGramTokenFilter
object
: Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.- maxGram
integer
: The maximum n-gram length. Default is 2. - minGram
integer
: The minimum n-gram length. Default is 1. Must be less than the value of maxGram. - side EdgeNGramTokenFilterSide
- @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- maxGram
EdgeNGramTokenFilterSide
- EdgeNGramTokenFilterSide
string
(values: front, back): Specifies which side of the input an n-gram should be generated from.
EdgeNGramTokenFilterV2
- EdgeNGramTokenFilterV2
object
: Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.- maxGram
integer
: The maximum n-gram length. Default is 2. Maximum is 300. - minGram
integer
: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram. - side EdgeNGramTokenFilterSide
- @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- maxGram
EdgeNGramTokenizer
- EdgeNGramTokenizer
object
: Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.- maxGram
integer
: The maximum n-gram length. Default is 2. Maximum is 300. - minGram
integer
: The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram. - tokenChars
array
: Character classes to keep in the tokens.- items TokenCharacterKind
- @odata.type required
string
- name required
string
: The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- maxGram
ElisionTokenFilter
- ElisionTokenFilter
object
: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.- articles
array
: The set of articles to remove.- items
string
- items
- @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- articles
EncryptionKey
- EncryptionKey
object
: A customer-managed encryption key in Azure Key Vault. Keys that you create and manage can be used to encrypt or decrypt data-at-rest in Azure Cognitive Search, such as indexes and synonym maps.- accessCredentials AzureActiveDirectoryApplicationCredentials
- keyVaultKeyName required
string
: The name of your Azure Key Vault key to be used to encrypt your data at rest. - keyVaultKeyVersion required
string
: The version of your Azure Key Vault key to be used to encrypt your data at rest. - keyVaultUri required
string
: The URI of your Azure Key Vault, also referred to as DNS name, that contains the key to be used to encrypt your data at rest. An example URI might be https://my-keyvault-name.vault.azure.net.
EntityCategory
- EntityCategory
string
(values: location, organization, person, quantity, datetime, url, email): A string indicating what entity categories to return.
EntityRecognitionSkill
- EntityRecognitionSkill
object
: Text analytics entity recognition.- categories
array
: A list of entity categories that should be extracted.- items EntityCategory
- defaultLanguageCode EntityRecognitionSkillLanguage
- includeTypelessEntities
boolean
: Determines whether or not to include entities which are well known but don't conform to a pre-defined type. If this configuration is not set (default), set to null or set to false, entities which don't conform to one of the pre-defined types will not be surfaced. - minimumPrecision
number
: A value between 0 and 1 that be used to only include entities whose confidence score is greater than the value specified. If not set (default), or if explicitly set to null, all entities will be included. - @odata.type required
string
- context
string
: Represents the level at which operations take place, such as the document root or document content (for example, /document or /document/content). The default is /document. - description
string
: The description of the skill which describes the inputs, outputs, and usage of the skill. - inputs required
array
: Inputs of the skills could be a column in the source data set, or the output of an upstream skill.- items InputFieldMappingEntry
- name
string
: The name of the skill which uniquely identifies it within the skillset. A skill with no name defined will be given a default name of its 1-based index in the skills array, prefixed with the character '#'. - outputs required
array
: The output of a skill is either a field in a search index, or a value that can be consumed as an input by another skill.- items OutputFieldMappingEntry
- categories
EntityRecognitionSkillLanguage
- EntityRecognitionSkillLanguage
string
(values: ar, cs, zh-Hans, zh-Hant, da, nl, en, fi, fr, de, el, hu, it, ja, ko, no, pl, pt-PT, pt-BR, ru, es, sv, tr): The language codes supported for input text by EntityRecognitionSkill.
Field
- Field
object
: Represents a field in an index definition, which describes the name, data type, and search behavior of a field.- analyzer AnalyzerName
- facetable
boolean
: A value indicating whether to enable the field to be referenced in facet queries. Typically used in a presentation of search results that includes hit count by category (for example, search for digital cameras and see hits by brand, by megapixels, by price, and so on). This property must be null for complex fields. Fields of type Edm.GeographyPoint or Collection(Edm.GeographyPoint) cannot be facetable. Default is true for all other simple fields. - fields
array
: A list of sub-fields if this is a field of type Edm.ComplexType or Collection(Edm.ComplexType). Must be null or empty for simple fields.- items Field
- filterable
boolean
: A value indicating whether to enable the field to be referenced in $filter queries. filterable differs from searchable in how strings are handled. Fields of type Edm.String or Collection(Edm.String) that are filterable do not undergo word-breaking, so comparisons are for exact matches only. For example, if you set such a field f to "sunny day", $filter=f eq 'sunny' will find no matches, but $filter=f eq 'sunny day' will. This property must be null for complex fields. Default is true for simple fields and null for complex fields. - indexAnalyzer AnalyzerName
- key
boolean
: A value indicating whether the field uniquely identifies documents in the index. Exactly one top-level field in each index must be chosen as the key field and it must be of type Edm.String. Key fields can be used to look up documents directly and update or delete specific documents. Default is false for simple fields and null for complex fields. - name required
string
: The name of the field, which must be unique within the fields collection of the index or parent field. - retrievable
boolean
: A value indicating whether the field can be returned in a search result. You can disable this option if you want to use a field (for example, margin) as a filter, sorting, or scoring mechanism but do not want the field to be visible to the end user. This property must be true for key fields, and it must be null for complex fields. This property can be changed on existing fields. Enabling this property does not cause any increase in index storage requirements. Default is true for simple fields and null for complex fields. - searchAnalyzer AnalyzerName
- searchable
boolean
: A value indicating whether the field is full-text searchable. This means it will undergo analysis such as word-breaking during indexing. If you set a searchable field to a value like "sunny day", internally it will be split into the individual tokens "sunny" and "day". This enables full-text searches for these terms. Fields of type Edm.String or Collection(Edm.String) are searchable by default. This property must be false for simple fields of other non-string data types, and it must be null for complex fields. Note: searchable fields consume extra space in your index since Azure Cognitive Search will store an additional tokenized version of the field value for full-text searches. If you want to save space in your index and you don't need a field to be included in searches, set searchable to false. - sortable
boolean
: A value indicating whether to enable the field to be referenced in $orderby expressions. By default Azure Cognitive Search sorts results by score, but in many experiences users will want to sort by fields in the documents. A simple field can be sortable only if it is single-valued (it has a single value in the scope of the parent document). Simple collection fields cannot be sortable, since they are multi-valued. Simple sub-fields of complex collections are also multi-valued, and therefore cannot be sortable. This is true whether it's an immediate parent field, or an ancestor field, that's the complex collection. Complex fields cannot be sortable and the sortable property must be null for such fields. The default for sortable is true for single-valued simple fields, false for multi-valued simple fields, and null for complex fields. - synonymMaps
array
: A list of the names of synonym maps to associate with this field. This option can be used only with searchable fields. Currently only one synonym map per field is supported. Assigning a synonym map to a field ensures that query terms targeting that field are expanded at query-time using the rules in the synonym map. This attribute can be changed on existing fields. Must be null or an empty collection for complex fields.- items
string
- items
- type required DataType
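A few hedged example field definitions using the options above (all names are placeholders):
// Hypothetical field definitions for an index's fields array.
{ "name": "hotelId", "type": "Edm.String", "key": true, "filterable": true }
{ "name": "description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" }
{ "name": "location", "type": "Edm.GeographyPoint", "filterable": true, "sortable": true }
{ "name": "rooms", "type": "Collection(Edm.ComplexType)", "fields": [ { "name": "rate", "type": "Edm.Double", "filterable": true } ] }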
FieldMapping
- FieldMapping
object
: Defines a mapping between a field in a data source and a target field in an index.- mappingFunction FieldMappingFunction
- sourceFieldName required
string
: The name of the field in the data source. - targetFieldName
string
: The name of the target field in the index. Same as the source field name by default.
FieldMappingFunction
- FieldMappingFunction
object
: Represents a function that transforms a value from a data source before indexing.- name required
string
: The name of the field mapping function. - parameters
object
: A dictionary of parameter name/value pairs to pass to the function. Each value must be of a primitive type.
- name required
FreshnessScoringFunction
- FreshnessScoringFunction
object
: Defines a function that boosts scores based on the value of a date-time field.- freshness required FreshnessScoringParameters
- boost required
number
: A multiplier for the raw score. Must be a positive number not equal to 1.0. - fieldName required
string
: The name of the field used as input to the scoring function. - interpolation ScoringFunctionInterpolation
- type required
string
FreshnessScoringParameters
- FreshnessScoringParameters
object
: Provides parameter values to a freshness scoring function.- boostingDuration required
string
: The expiration period after which boosting will stop for a particular document.
- boostingDuration required
HighWaterMarkChangeDetectionPolicy
- HighWaterMarkChangeDetectionPolicy
object
: Defines a data change detection policy that captures changes based on the value of a high water mark column.- highWaterMarkColumnName required
string
: The name of the high water mark column. - @odata.type required
string
- highWaterMarkColumnName required
ImageAnalysisSkill
- ImageAnalysisSkill
object
: A skill that analyzes image files. It extracts a rich set of visual features based on the image content.- defaultLanguageCode ImageAnalysisSkillLanguage
- details
array
: A string indicating which domain-specific details to return.- items ImageDetail
- visualFeatures
array
: A list of visual features.- items VisualFeature
- @odata.type required
string
- context
string
: Represents the level at which operations take place, such as the document root or document content (for example, /document or /document/content). The default is /document. - description
string
: The description of the skill which describes the inputs, outputs, and usage of the skill. - inputs required
array
: Inputs of the skills could be a column in the source data set, or the output of an upstream skill.- items InputFieldMappingEntry
- name
string
: The name of the skill which uniquely identifies it within the skillset. A skill with no name defined will be given a default name of its 1-based index in the skills array, prefixed with the character '#'. - outputs required
array
: The output of a skill is either a field in a search index, or a value that can be consumed as an input by another skill.- items OutputFieldMappingEntry
ImageAnalysisSkillLanguage
- ImageAnalysisSkillLanguage
string
(values: en, zh): The language codes supported for input by ImageAnalysisSkill.
ImageDetail
- ImageDetail
string
(values: celebrities, landmarks): A string indicating which domain-specific details to return.
Index
- Index
object
: Represents a search index definition, which describes the fields and search behavior of an index.- @odata.etag
string
: The ETag of the index. - analyzers
array
: The analyzers for the index.- items Analyzer
- charFilters
array
: The character filters for the index.- items CharFilter
- corsOptions CorsOptions
- defaultScoringProfile
string
: The name of the scoring profile to use if none is specified in the query. If this property is not set and no scoring profile is specified in the query, then default scoring (tf-idf) will be used. - encryptionKey EncryptionKey
- fields required
array
: The fields of the index.- items Field
- name required
string
: The name of the index. - scoringProfiles
array
: The scoring profiles for the index.- items ScoringProfile
- suggesters
array
: The suggesters for the index.- items Suggester
- tokenFilters
array
: The token filters for the index.- items TokenFilter
- tokenizers
array
: The tokenizers for the index.- items Tokenizer
- @odata.etag
IndexGetStatisticsResult
- IndexGetStatisticsResult
object
: Statistics for a given index. Statistics are collected periodically and are not guaranteed to always be up-to-date.- documentCount
integer
: The number of documents in the index. - storageSize
integer
: The amount of storage in bytes consumed by the index.
- documentCount
IndexListResult
- IndexListResult
object
: Response from a List Indexes request. If successful, it includes the full definitions of all indexes.- value
array
: The indexes in the Search service.- items Index
- value
Indexer
- Indexer
object
: Represents an indexer.- @odata.etag
string
: The ETag of the Indexer. - dataSourceName required
string
: The name of the datasource from which this indexer reads data. - description
string
: The description of the indexer. - disabled
boolean
: A value indicating whether the indexer is disabled. Default is false. - fieldMappings
array
: Defines mappings between fields in the data source and corresponding target fields in the index.- items FieldMapping
- name required
string
: The name of the indexer. - outputFieldMappings
array
: Output field mappings are applied after enrichment and immediately before indexing.- items FieldMapping
- parameters IndexingParameters
- schedule IndexingSchedule
- skillsetName
string
: The name of the skillset executing with this indexer. - targetIndexName required
string
: The name of the index to which this indexer writes data.
- @odata.etag
IndexerExecutionInfo
- IndexerExecutionInfo
object
: Represents the current status and execution history of an indexer.- executionHistory
array
: History of the recent indexer executions, sorted in reverse chronological order.- items IndexerExecutionResult
- lastResult IndexerExecutionResult
- limits IndexerLimits
- status IndexerStatus
- executionHistory
IndexerExecutionResult
- IndexerExecutionResult
object
: Represents the result of an individual indexer execution.- endTime
string
: The end time of this indexer execution, if the execution has already completed. - errorMessage
string
: The error message indicating the top-level error, if any. - errors
array
: The item-level indexing errors.- items ItemError
- finalTrackingState
string
: Change tracking state with which an indexer execution finished. - initialTrackingState
string
: Change tracking state with which an indexer execution started. - itemsFailed
integer
: The number of items that failed to be indexed during this indexer execution. - itemsProcessed
integer
: The number of items that were processed during this indexer execution. This includes both successfully processed items and items where indexing was attempted but failed. - startTime
string
: The start time of this indexer execution. - status IndexerExecutionStatus
- warnings
array
: The item-level indexing warnings.- items ItemWarning
- endTime
IndexerExecutionStatus
- IndexerExecutionStatus
string
(values: transientFailure, success, inProgress, reset): Represents the status of an individual indexer execution.
IndexerLimits
- IndexerLimits
object
- maxDocumentContentCharactersToExtract
number
: The maximum number of characters that will be extracted from a document picked up for indexing. - maxDocumentExtractionSize
number
: The maximum size of a document, in bytes, which will be considered valid for indexing. - maxRunTime
string
: The maximum duration that the indexer is permitted to run for one execution.
- maxDocumentContentCharactersToExtract
IndexerListResult
- IndexerListResult
object
: Response from a List Indexers request. If successful, it includes the full definitions of all indexers.- value
array
: The indexers in the Search service.- items Indexer
- value
IndexerStatus
- IndexerStatus
string
(values: unknown, error, running): Represents the overall indexer status.
IndexingParameters
- IndexingParameters
object
: Represents parameters for indexer execution.- base64EncodeKeys
boolean
: Whether indexer will base64-encode all values that are inserted into key field of the target index. This is needed if keys can contain characters that are invalid in keys (such as dot '.'). Default is false. - batchSize
integer
: The number of items that are read from the data source and indexed as a single batch in order to improve performance. The default depends on the data source type. - configuration
object
: A dictionary of indexer-specific configuration properties. Each name is the name of a specific property. Each value must be of a primitive type. - maxFailedItems
integer
: The maximum number of items that can fail indexing for indexer execution to still be considered successful. -1 means no limit. Default is 0. - maxFailedItemsPerBatch
integer
: The maximum number of items in a single batch that can fail indexing for the batch to still be considered successful. -1 means no limit. Default is 0.
- base64EncodeKeys
IndexingSchedule
- IndexingSchedule
object
: Represents a schedule for indexer execution.
- interval required
string
: The interval of time between indexer executions.
- startTime
string
: The time when an indexer should start running.
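A hedged example schedule; the interval is assumed to be an ISO 8601 duration and the start time an ISO 8601 timestamp:
// Hypothetical schedule: run every two hours, starting at midnight UTC on 1 January 2024.
{ "interval": "PT2H", "startTime": "2024-01-01T00:00:00Z" }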
InputFieldMappingEntry
- InputFieldMappingEntry
object
: Input field mapping for a skill.- inputs
array
: The recursive inputs used when creating a complex type.- items InputFieldMappingEntry
- name required
string
: The name of the input. - source
string
: The source of the input. - sourceContext
string
: The source context used for selecting recursive inputs.
- inputs
ItemError
- ItemError
object
: Represents an item- or document-level indexing error.- details
string
: Additional, verbose details about the error to assist in debugging the indexer. This may not be always available. - documentationLink
string
: A link to a troubleshooting guide for these classes of errors. This may not be always available. - errorMessage
string
: The message describing the error that occurred while processing the item. - key
string
: The key of the item for which indexing failed. - name
string
: The name of the source at which the error originated. For example, this could refer to a particular skill in the attached skillset. This may not be always available. - statusCode
integer
: The status code indicating why the indexing operation failed. Possible values include: 400 for a malformed input document, 404 for document not found, 409 for a version conflict, 422 when the index is temporarily unavailable, or 503 for when the service is too busy.
- details
ItemWarning
- ItemWarning
object
: Represents an item-level warning.- details
string
: Additional, verbose details about the warning to assist in debugging the indexer. This may not be always available. - documentationLink
string
: A link to a troubleshooting guide for these classes of warnings. This may not be always available. - key
string
: The key of the item which generated a warning. - message
string
: The message describing the warning that occurred while processing the item. - name
string
: The name of the source at which the warning originated. For example, this could refer to a particular skill in the attached skillset. This may not be always available.
- details
KeepTokenFilter
- KeepTokenFilter
object
: A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.- keepWords required
array
: The list of words to keep.- items
string
- items
- keepWordsCase
boolean
: A value indicating whether to lower case all words first. Default is false. - @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- keepWords required
KeyPhraseExtractionSkill
- KeyPhraseExtractionSkill
object
: A skill that uses text analytics for key phrase extraction.- defaultLanguageCode KeyPhraseExtractionSkillLanguage
- maxKeyPhraseCount
integer
: A number indicating how many key phrases to return. If absent, all identified key phrases will be returned. - @odata.type required
string
- context
string
: Represents the level at which operations take place, such as the document root or document content (for example, /document or /document/content). The default is /document. - description
string
: The description of the skill which describes the inputs, outputs, and usage of the skill. - inputs required
array
: Inputs of the skills could be a column in the source data set, or the output of an upstream skill.- items InputFieldMappingEntry
- name
string
: The name of the skill which uniquely identifies it within the skillset. A skill with no name defined will be given a default name of its 1-based index in the skills array, prefixed with the character '#'. - outputs required
array
: The output of a skill is either a field in a search index, or a value that can be consumed as an input by another skill.- items OutputFieldMappingEntry
KeyPhraseExtractionSkillLanguage
- KeyPhraseExtractionSkillLanguage
string
(values: da, nl, en, fi, fr, de, it, ja, ko, no, pl, pt-PT, pt-BR, ru, es, sv): The language codes supported for input text by KeyPhraseExtractionSkill.
KeywordMarkerTokenFilter
- KeywordMarkerTokenFilter
object
: Marks terms as keywords. This token filter is implemented using Apache Lucene.- ignoreCase
boolean
: A value indicating whether to ignore case. If true, all words are converted to lower case first. Default is false. - keywords required
array
: A list of words to mark as keywords.- items
string
- items
- @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- ignoreCase
KeywordTokenizer
- KeywordTokenizer
object
: Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.- bufferSize
integer
: The read buffer size in bytes. Default is 256. - @odata.type required
string
- name required
string
: The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- bufferSize
KeywordTokenizerV2
- KeywordTokenizerV2
object
: Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.- maxTokenLength
integer
: The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. - @odata.type required
string
- name required
string
: The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- maxTokenLength
LanguageDetectionSkill
- LanguageDetectionSkill: A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.
- @odata.type required
string
- context
string
: Represents the level at which operations take place, such as the document root or document content (for example, /document or /document/content). The default is /document. - description
string
: The description of the skill which describes the inputs, outputs, and usage of the skill. - inputs required
array
: Inputs of the skills could be a column in the source data set, or the output of an upstream skill.- items InputFieldMappingEntry
- name
string
: The name of the skill which uniquely identifies it within the skillset. A skill with no name defined will be given a default name of its 1-based index in the skills array, prefixed with the character '#'. - outputs required
array
: The output of a skill is either a field in a search index, or a value that can be consumed as an input by another skill.- items OutputFieldMappingEntry
- @odata.type required
LengthTokenFilter
- LengthTokenFilter
object
: Removes words that are too long or too short. This token filter is implemented using Apache Lucene.- max
integer
: The maximum length in characters. Default and maximum is 300. - min
integer
: The minimum length in characters. Default is 0. Maximum is 300. Must be less than the value of max. - @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- max
LimitTokenFilter
- LimitTokenFilter
object
: Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.- consumeAllTokens
boolean
: A value indicating whether all tokens from the input must be consumed even if maxTokenCount is reached. Default is false. - maxTokenCount
integer
: The maximum number of tokens to produce. Default is 1. - @odata.type required
string
- name required
string
: The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
- consumeAllTokens
MagnitudeScoringFunction
- MagnitudeScoringFunction
object
: Defines a function that boosts scores based on the magnitude of a numeric field.- magnitude required MagnitudeScoringParameters
- boost required
number
: A multiplier for the raw score. Must be a positive number not equal to 1.0. - fieldName required
string
: The name of the field used as input to the scoring function. - interpolation ScoringFunctionInterpolation
- type required
string
MagnitudeScoringParameters
- MagnitudeSc