Extracting, transforming and selecting features
This section covers algorithms for working with features, roughly divided into these groups:
• Extraction: Extracting features from “raw” data
• Transformation: Scaling, converting, or modifying features
• Selection: Selecting a subset from a larger set of features
• Locality Sensitive Hashing (LSH): This class of algorithms combines aspects of feature transformation with other algorithms.
Table of Contents
• Feature Extractors
  o TF-IDF
  o Word2Vec
  o CountVectorizer
  o FeatureHasher
• Feature Transformers
  o Tokenizer
  o StopWordsRemover
  o n-gram
  o Binarizer
  o PCA
  o PolynomialExpansion
  o Discrete Cosine Transform (DCT)
  o StringIndexer
  o IndexToString
  o OneHotEncoder
  o VectorIndexer
  o Interaction
  o Normalizer
  o StandardScaler
  o RobustScaler
  o MinMaxScaler
  o MaxAbsScaler
  o Bucketizer
  o ElementwiseProduct
  o SQLTransformer
  o VectorAssembler
  o VectorSizeHint
  o QuantileDiscretizer
  o Imputer
• Feature Selectors
  o VectorSlicer
  o RFormula
  o ChiSqSelector
  o UnivariateFeatureSelector
  o VarianceThresholdSelector
• Locality Sensitive Hashing
  o LSH Operations
    - Feature Transformation
    - Approximate Similarity Join
    - Approximate Nearest Neighbor Search
  o LSH Algorithms
    - Bucketed Random Projection for Euclidean Distance
    - MinHash for Jaccard Distance
Feature Extractors
TF-IDF
Term frequency-inverse document frequency (TF-IDF) is a feature vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus. Denote a term by t, a document by d, and the corpus by D. Term frequency TF(t,d) is the number of times that term t appears in document d, while document frequency DF(t,D) is the number of documents that contains term t. If we only use term frequency to measure the importance, it is very easy to over-emphasize terms that appear very often but carry little information about the document, e.g. “a”, “the”, and “of”. If a term appears very often across the corpus, it means it doesn’t carry special information about a particular document. Inverse document frequency is a numerical measure of how much information a term provides:
IDF(t,D) = log[ (|D| + 1) / (DF(t,D) + 1) ]

where |D| is the total number of documents in the corpus. Since logarithm is used, if a term appears in all documents, its IDF value becomes 0. Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus. The TF-IDF measure is simply the product of TF and IDF:
TFIDF(t,d,D) = TF(t,d) · IDF(t,D)

There are several variants on the definition of term frequency and document frequency. In MLlib, we separate TF and IDF to make them flexible.
TF: Both HashingTF and CountVectorizer can be used to generate the term frequency vectors.
HashingTF is a Transformer which takes sets of terms and converts those sets into fixed-length feature vectors. In text processing, a “set of terms” might be a bag of words. HashingTF utilizes the hashing trick. A raw feature is mapped into an index (term) by applying a hash function. The hash function used here is MurmurHash 3. Then term frequencies are calculated based on the mapped indices. This approach avoids the need to compute a global term-to-index map, which can be expensive for a large corpus, but it suffers from potential hash collisions, where different raw features may become the same term after hashing. To reduce the chance of collision, we can increase the target feature dimension, i.e. the number of buckets of the hash table. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the feature dimension, otherwise the features will not be mapped evenly to the vector indices. The default feature dimension is 2^18 = 262,144. An optional binary toggle parameter controls term frequency counts. When set to true all nonzero frequency counts are set to 1. This is especially useful for discrete probabilistic models that model binary, rather than integer, counts.
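As a small sketch of the two parameters just described (not part of the original example; the column names "words" and "rawFeatures" are placeholders), a HashingTF can be configured with a power-of-two dimension and binary counts like this:
import org.apache.spark.ml.feature.HashingTF

// Sketch: power-of-two bucket count plus the binary toggle described above.
val binaryHashingTF = new HashingTF()
  .setInputCol("words")          // placeholder column of tokenized words
  .setOutputCol("rawFeatures")
  .setNumFeatures(1 << 14)       // 16,384 buckets; a power of two keeps the modulo mapping even
  .setBinary(true)               // report 1.0 for any term that occurs instead of its raw count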
CountVectorizer converts text documents to vectors of term counts. Refer to CountVectorizer for more details.
IDF: IDF is an Estimator which is fit on a dataset and produces an IDFModel. The IDFModel takes feature vectors (generally created from HashingTF or CountVectorizer) and scales each feature. Intuitively, it down-weights features which appear frequently in a corpus.
Note: spark.ml doesn’t provide tools for text segmentation. We refer users to the Stanford NLP Group and scalanlp/chalk.
Examples
In the following code segment, we start with a set of sentences. We split each sentence into words using Tokenizer. For each sentence (bag of words), we use HashingTF to hash the sentence into a feature vector. We use IDF to rescale the feature vectors; this generally improves performance when using text as features. Our feature vectors could then be passed to a learning algorithm.
Refer to the HashingTF Scala docs and the IDF Scala docs for more details on the API.
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

val sentenceData = spark.createDataFrame(Seq(
  (0.0, "Hi I heard about Spark"),
  (0.0, "I wish Java could use case classes"),
  (1.0, "Logistic regression models are neat")
)).toDF("label", "sentence")

val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val wordsData = tokenizer.transform(sentenceData)

val hashingTF = new HashingTF()
  .setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(20)

val featurizedData = hashingTF.transform(wordsData)
// alternatively, CountVectorizer can also be used to get term frequency vectors

val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
val idfModel = idf.fit(featurizedData)

val rescaledData = idfModel.transform(featurizedData)
rescaledData.select("label", "features").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/TfIdfExample.scala” in the Spark repo.
Word2Vec
Word2Vec is an Estimator which takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a unique fixed-size vector. The Word2VecModel transforms each document into a vector using the average of all words in the document; this vector can then be used as features for prediction, document similarity calculations, etc. Please refer to the MLlib user guide on Word2Vec for more details.
Examples
In the following code segment, we start with a set of documents, each of which is represented as a sequence of words. For each document, we transform it into a feature vector. This feature vector could then be passed to a learning algorithm.
Refer to the Word2Vec Scala docs for more details on the API.
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row

// Input data: Each row is a bag of words from a sentence or document.
val documentDF = spark.createDataFrame(Seq(
  "Hi I heard about Spark".split(" "),
  "I wish Java could use case classes".split(" "),
  "Logistic regression models are neat".split(" ")
).map(Tuple1.apply)).toDF("text")

// Learn a mapping from words to Vectors.
val word2Vec = new Word2Vec()
  .setInputCol("text")
  .setOutputCol("result")
  .setVectorSize(3)
  .setMinCount(0)
val model = word2Vec.fit(documentDF)

val result = model.transform(documentDF)
result.collect().foreach { case Row(text: Seq[_], features: Vector) =>
  println(s"Text: [${text.mkString(", ")}] => \nVector: $features\n") }
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/Word2VecExample.scala” in the Spark repo.
CountVectorizer
CountVectorizer and CountVectorizerModel aim to help convert a collection of text documents to vectors of token counts. When an a-priori dictionary is not available, CountVectorizer can be used as an Estimator to extract the vocabulary, and generates a CountVectorizerModel. The model produces sparse representations for the documents over the vocabulary, which can then be passed to other algorithms like LDA.
During the fitting process, CountVectorizer will select the top vocabSize words ordered by term frequency across the corpus. An optional parameter minDF also affects the fitting process by specifying the minimum number (or fraction if < 1.0) of documents a term must appear in to be included in the vocabulary. Another optional binary toggle parameter controls the output vector. If set to true all nonzero counts are set to 1. This is especially useful for discrete probabilistic models that model binary, rather than integer, counts.
Examples
Assume that we have the following DataFrame with columns id and texts:

 id | texts
----|--------------------------------
 0  | Array("a", "b", "c")
 1  | Array("a", "b", "b", "c", "a")

Each row in texts is a document of type Array[String]. Invoking fit of CountVectorizer produces a CountVectorizerModel with vocabulary (a, b, c). Then the output column "vector" after transformation contains:

 id | texts                          | vector
----|--------------------------------|--------------------------
 0  | Array("a", "b", "c")           | (3,[0,1,2],[1.0,1.0,1.0])
 1  | Array("a", "b", "b", "c", "a") | (3,[0,1,2],[2.0,2.0,1.0])

Each vector represents the token counts of the document over the vocabulary.
Refer to the CountVectorizer Scala docs and the CountVectorizerModel Scala docs for more details on the API.
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}

val df = spark.createDataFrame(Seq(
  (0, Array("a", "b", "c")),
  (1, Array("a", "b", "b", "c", "a"))
)).toDF("id", "words")

// fit a CountVectorizerModel from the corpus
val cvModel: CountVectorizerModel = new CountVectorizer()
  .setInputCol("words")
  .setOutputCol("features")
  .setVocabSize(3)
  .setMinDF(2)
  .fit(df)

// alternatively, define CountVectorizerModel with a-priori vocabulary
val cvm = new CountVectorizerModel(Array("a", "b", "c"))
  .setInputCol("words")
  .setOutputCol("features")

cvModel.transform(df).show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/CountVectorizerExample.scala” in the Spark repo.
FeatureHasher
Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space). This is done using the hashing trick to map features to indices in the feature vector.
The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:
• Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns using the categoricalCols parameter.
• String columns: For categorical features, the hash value of the string “column_name=value” is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are “one-hot” encoded (similarly to using OneHotEncoder with dropLast=false).
• Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as “column_name=true” or “column_name=false”, with an indicator value of 1.0.
Null (missing) values are ignored (implicitly zero in the resulting feature vector).
The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.

Examples
Assume that we have a DataFrame with 4 input columns real, bool, stringNum, and string. These different data types as input will illustrate the behavior of the transform to produce a column of feature vectors.

real | bool  | stringNum | string
-----|-------|-----------|-------
2.2  | true  | 1         | foo
3.3  | false | 2         | bar
4.4  | false | 3         | baz
5.5  | false | 4         | foo

Then the output of FeatureHasher.transform on this DataFrame is:

real | bool  | stringNum | string | features
-----|-------|-----------|--------|--------------------------------------------------------
2.2  | true  | 1         | foo    | (262144,[51871,63643,174475,253195],[1.0,1.0,2.2,1.0])
3.3  | false | 2         | bar    | (262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.3])
4.4  | false | 3         | baz    | (262144,[24279,140467,174475,196810],[1.0,1.0,4.4,1.0])
5.5  | false | 4         | foo    | (262144,[63643,140467,168512,174475],[1.0,1.0,1.0,5.5])

The resulting feature vectors could then be passed to a learning algorithm.
Refer to the FeatureHasher Scala docs for more details on the API.
import org.apache.spark.ml.feature.FeatureHasher

val dataset = spark.createDataFrame(Seq(
  (2.2, true, "1", "foo"),
  (3.3, false, "2", "bar"),
  (4.4, false, "3", "baz"),
  (5.5, false, "4", "foo")
)).toDF("real", "bool", "stringNum", "string")

val hasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setOutputCol("features")

val featurized = hasher.transform(dataset)
featurized.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/FeatureHasherExample.scala” in the Spark repo.
Feature Transformers
Tokenizer
Tokenization is the process of taking text (such as a sentence) and breaking it into individual terms (usually words). A simple Tokenizer class provides this functionality. The example below shows how to split sentences into sequences of words.
RegexTokenizer allows more advanced tokenization based on regular expression (regex) matching. By default, the parameter “pattern” (regex, default: “\s+”) is used as delimiters to split the input text. Alternatively, users can set parameter “gaps” to false indicating the regex “pattern” denotes “tokens” rather than splitting gaps, and find all matching occurrences as the tokenization result.
Examples
Refer to the Tokenizer Scala docs and the RegexTokenizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.{RegexTokenizer, Tokenizer}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val sentenceDataFrame = spark.createDataFrame(Seq(
  (0, "Hi I heard about Spark"),
  (1, "I wish Java could use case classes"),
  (2, "Logistic,regression,models,are,neat")
)).toDF("id", "sentence")

val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val regexTokenizer = new RegexTokenizer()
  .setInputCol("sentence")
  .setOutputCol("words")
  .setPattern("\\W") // alternatively .setPattern("\\w+").setGaps(false)

val countTokens = udf { (words: Seq[String]) => words.length }

val tokenized = tokenizer.transform(sentenceDataFrame)
tokenized.select("sentence", "words")
  .withColumn("tokens", countTokens(col("words"))).show(false)

val regexTokenized = regexTokenizer.transform(sentenceDataFrame)
regexTokenized.select("sentence", "words")
  .withColumn("tokens", countTokens(col("words"))).show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/TokenizerExample.scala” in the Spark repo.
StopWordsRemover
Stop words are words which should be excluded from the input, typically because the words appear frequently and don’t carry as much meaning.
StopWordsRemover takes as input a sequence of strings (e.g. the output of a Tokenizer) and drops all the stop words from the input sequences. The list of stopwords is specified by the stopWords parameter. Default stop words for some languages are accessible by calling StopWordsRemover.loadDefaultStopWords(language), for which available options are “danish”, “dutch”, “english”, “finnish”, “french”, “german”, “hungarian”, “italian”, “norwegian”, “portuguese”, “russian”, “spanish”, “swedish” and “turkish”. A boolean parameter caseSensitive indicates if the matches should be case sensitive (false by default).
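As a brief sketch of the stopWords and caseSensitive parameters described above (not part of the original example; the column names are placeholders), a remover can be configured with one of the built-in non-English lists and case-sensitive matching:
import org.apache.spark.ml.feature.StopWordsRemover

// Sketch: load the built-in French stop words and match them case-sensitively.
val frenchStopWords = StopWordsRemover.loadDefaultStopWords("french")
val frenchRemover = new StopWordsRemover()
  .setInputCol("raw")
  .setOutputCol("filtered")
  .setStopWords(frenchStopWords)
  .setCaseSensitive(true)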
Examples
Assume that we have the following DataFrame with columns id and raw:

 id | raw
----|-----------------------------
 0  | [I, saw, the, red, balloon]
 1  | [Mary, had, a, little, lamb]

Applying StopWordsRemover with raw as the input column and filtered as the output column, we should get the following:

 id | raw                          | filtered
----|------------------------------|---------------------
 0  | [I, saw, the, red, balloon]  | [saw, red, balloon]
 1  | [Mary, had, a, little, lamb] | [Mary, little, lamb]

In filtered, the stop words “I”, “the”, “had”, and “a” have been filtered out.
Refer to the StopWordsRemover Scala docs for more details on the API.
import org.apache.spark.ml.feature.StopWordsRemover

val remover = new StopWordsRemover()
  .setInputCol("raw")
  .setOutputCol("filtered")

val dataSet = spark.createDataFrame(Seq(
  (0, Seq("I", "saw", "the", "red", "balloon")),
  (1, Seq("Mary", "had", "a", "little", "lamb"))
)).toDF("id", "raw")

remover.transform(dataSet).show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/StopWordsRemoverExample.scala” in the Spark repo.
n-gram
An n-gram is a sequence of n tokens (typically words) for some integer n. The NGram class can be used to transform input features into n-grams.
NGram takes as input a sequence of strings (e.g. the output of a Tokenizer). The parameter n is used to determine the number of terms in each n-gram. The output will consist of a sequence of n-grams where each n-gram is represented by a space-delimited string of n consecutive words. If the input sequence contains fewer than n strings, no output is produced.
Examples
Refer to the NGram Scala docs for more details on the API.
import org.apache.spark.ml.feature.NGram

val wordDataFrame = spark.createDataFrame(Seq(
  (0, Array("Hi", "I", "heard", "about", "Spark")),
  (1, Array("I", "wish", "Java", "could", "use", "case", "classes")),
  (2, Array("Logistic", "regression", "models", "are", "neat"))
)).toDF("id", "words")

val ngram = new NGram().setN(2).setInputCol("words").setOutputCol("ngrams")

val ngramDataFrame = ngram.transform(wordDataFrame)
ngramDataFrame.select("ngrams").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/NGramExample.scala” in the Spark repo.
Binarizer
Binarization is the process of thresholding numerical features to binary (0/1) features.
Binarizer takes the common parameters inputCol and outputCol, as well as the threshold for binarization. Feature values greater than the threshold are binarized to 1.0; values equal to or less than the threshold are binarized to 0.0. Both Vector and Double types are supported for inputCol.
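Since a Vector input column is also supported, a minimal sketch of that variant (not the original example; assumes an active SparkSession named spark, as in the other examples) looks the same apart from the column type:
import org.apache.spark.ml.feature.Binarizer
import org.apache.spark.ml.linalg.Vectors

// Sketch: binarize a vector-valued column element-wise with the same threshold parameter.
val vecDF = spark.createDataFrame(Seq(
  (0, Vectors.dense(0.1, 0.8, 0.2))
)).toDF("id", "vecFeature")

val vecBinarizer = new Binarizer()
  .setInputCol("vecFeature")
  .setOutputCol("binarized_vecFeature")
  .setThreshold(0.5)

vecBinarizer.transform(vecDF).show(false)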
Examples
Refer to the Binarizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Binarizer

val data = Array((0, 0.1), (1, 0.8), (2, 0.2))
val dataFrame = spark.createDataFrame(data).toDF("id", "feature")

val binarizer: Binarizer = new Binarizer()
  .setInputCol("feature")
  .setOutputCol("binarized_feature")
  .setThreshold(0.5)

val binarizedDataFrame = binarizer.transform(dataFrame)

println(s"Binarizer output with Threshold = ${binarizer.getThreshold}")
binarizedDataFrame.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/BinarizerExample.scala” in the Spark repo.
PCA
PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. A PCA class trains a model to project vectors to a low-dimensional space using PCA. The example below shows how to project 5-dimensional feature vectors into 3-dimensional principal components.
Examples
Refer to the PCA Scala docs for more details on the API.
import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.Vectors

val data = Array(
  Vectors.sparse(5, Seq((1, 1.0), (3, 7.0))),
  Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
  Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0)
)
val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val pca = new PCA()
  .setInputCol("features")
  .setOutputCol("pcaFeatures")
  .setK(3)
  .fit(df)

val result = pca.transform(df).select("pcaFeatures")
result.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/PCAExample.scala” in the Spark repo.
PolynomialExpansion
Polynomial expansion is the process of expanding your features into a polynomial space, which is formulated by an n-degree combination of original dimensions. A PolynomialExpansion class provides this functionality. The example below shows how to expand your features into a 3-degree polynomial space.
Examples
Refer to the PolynomialExpansion Scala docs for more details on the API.
import org.apache.spark.ml.feature.PolynomialExpansion
import org.apache.spark.ml.linalg.Vectors

val data = Array(
  Vectors.dense(2.0, 1.0),
  Vectors.dense(0.0, 0.0),
  Vectors.dense(3.0, -1.0)
)
val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val polyExpansion = new PolynomialExpansion()
  .setInputCol("features")
  .setOutputCol("polyFeatures")
  .setDegree(3)

val polyDF = polyExpansion.transform(df)
polyDF.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/PolynomialExpansionExample.scala” in the Spark repo.
Discrete Cosine Transform (DCT)
The Discrete Cosine Transform transforms a length N real-valued sequence in the time domain into another length N real-valued sequence in the frequency domain. A DCT class provides this functionality, implementing the DCT-II and scaling the result by 1/√2 such that the representing matrix for the transform is unitary. No shift is applied to the transformed sequence (e.g. the 0th element of the transformed sequence is the 0th DCT coefficient and not the N/2-th).
Examples
Refer to the DCT Scala docs for more details on the API.
import org.apache.spark.ml.feature.DCT
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  Vectors.dense(0.0, 1.0, -2.0, 3.0),
  Vectors.dense(-1.0, 2.0, 4.0, -7.0),
  Vectors.dense(14.0, -2.0, -5.0, 1.0))

val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val dct = new DCT()
  .setInputCol("features")
  .setOutputCol("featuresDCT")
  .setInverse(false)

val dctDf = dct.transform(df)
dctDf.select("featuresDCT").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/DCTExample.scala” in the Spark repo.
StringIndexer
StringIndexer encodes a string column of labels to a column of label indices. StringIndexer can encode multiple columns. The indices are in [0, numLabels), and four ordering options are supported: “frequencyDesc”: descending order by label frequency (most frequent label assigned 0), “frequencyAsc”: ascending order by label frequency (least frequent label assigned 0), “alphabetDesc”: descending alphabetical order, and “alphabetAsc”: ascending alphabetical order (default = “frequencyDesc”). Note that in case of equal frequency when under “frequencyDesc”/”frequencyAsc”, the strings are further sorted by alphabet.
The unseen labels will be put at index numLabels if user chooses to keep them. If the input column is numeric, we cast it to string and index the string values. When downstream pipeline components such as Estimator or Transformer make use of this string-indexed label, you must set the input column of the component to this string-indexed column name. In many cases, you can set the input column with setInputCol.
Examples
Assume that we have the following DataFrame with columns id and category:

 id | category
----|---------
 0  | a
 1  | b
 2  | c
 3  | a
 4  | a
 5  | c

category is a string column with three labels: “a”, “b”, and “c”. Applying StringIndexer with category as the input column and categoryIndex as the output column, we should get the following:

 id | category | categoryIndex
----|----------|--------------
 0  | a        | 0.0
 1  | b        | 2.0
 2  | c        | 1.0
 3  | a        | 0.0
 4  | a        | 0.0
 5  | c        | 1.0

“a” gets index 0 because it is the most frequent, followed by “c” with index 1 and “b” with index 2.
Additionally, there are three strategies regarding how StringIndexer will handle unseen labels when you have fit a StringIndexer on one dataset and then use it to transform another:
• throw an exception (which is the default)
• skip the row containing the unseen label entirely
• put unseen labels in a special additional bucket, at index numLabels
Examples
Let’s go back to our previous example but this time reuse our previously defined StringIndexer on the following dataset:

 id | category
----|---------
 0  | a
 1  | b
 2  | c
 3  | d
 4  | e

If you’ve not set how StringIndexer handles unseen labels or set it to “error”, an exception will be thrown. However, if you had called setHandleInvalid(“skip”), the following dataset will be generated:

 id | category | categoryIndex
----|----------|--------------
 0  | a        | 0.0
 1  | b        | 2.0
 2  | c        | 1.0

Notice that the rows containing “d” or “e” do not appear.
If you call setHandleInvalid(“keep”), the following dataset will be generated:

 id | category | categoryIndex
----|----------|--------------
 0  | a        | 0.0
 1  | b        | 2.0
 2  | c        | 1.0
 3  | d        | 3.0
 4  | e        | 3.0

Notice that the rows containing “d” or “e” are mapped to index “3.0”
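A compact sketch of these options together with the stringOrderType parameter mentioned earlier (not part of the original example; the small DataFrames are placeholders and assume an active SparkSession named spark):
import org.apache.spark.ml.feature.StringIndexer

// Sketch: order labels alphabetically and keep unseen labels in an extra bucket.
val trainDF = spark.createDataFrame(Seq((0, "a"), (1, "b"), (2, "c"))).toDF("id", "category")
val testDF = spark.createDataFrame(Seq((0, "a"), (1, "d"))).toDF("id", "category")

val flexibleIndexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
  .setStringOrderType("alphabetAsc")   // instead of the default "frequencyDesc"
  .setHandleInvalid("keep")            // "d" is unseen and gets index numLabels (3.0 here)
flexibleIndexer.fit(trainDF).transform(testDF).show()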
Refer to the StringIndexer Scala docs for more details on the API.
import org.apache.spark.ml.feature.StringIndexer

val df = spark.createDataFrame(
  Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c"))
).toDF("id", "category")

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")

val indexed = indexer.fit(df).transform(df)
indexed.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/StringIndexerExample.scala” in the Spark repo.
IndexToString
Symmetrically to StringIndexer, IndexToString maps a column of label indices back to a column containing the original labels as strings. A common use case is to produce indices from labels with StringIndexer, train a model with those indices and retrieve the original labels from the column of predicted indices with IndexToString. However, you are free to supply your own labels.
Examples
Building on the StringIndexer example, let’s assume we have the following DataFrame with columns id and categoryIndex:

 id | categoryIndex
----|--------------
 0  | 0.0
 1  | 2.0
 2  | 1.0
 3  | 0.0
 4  | 0.0
 5  | 1.0

Applying IndexToString with categoryIndex as the input column, originalCategory as the output column, we are able to retrieve our original labels (they will be inferred from the columns’ metadata):

 id | categoryIndex | originalCategory
----|---------------|-----------------
 0  | 0.0           | a
 1  | 2.0           | b
 2  | 1.0           | c
 3  | 0.0           | a
 4  | 0.0           | a
 5  | 1.0           | c

Refer to the IndexToString Scala docs for more details on the API.
import org.apache.spark.ml.attribute.Attribute
import org.apache.spark.ml.feature.{IndexToString, StringIndexer}

val df = spark.createDataFrame(Seq(
  (0, "a"),
  (1, "b"),
  (2, "c"),
  (3, "a"),
  (4, "a"),
  (5, "c")
)).toDF("id", "category")

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
  .fit(df)
val indexed = indexer.transform(df)

println(s"Transformed string column '${indexer.getInputCol}' " +
  s"to indexed column '${indexer.getOutputCol}'")
indexed.show()

val inputColSchema = indexed.schema(indexer.getOutputCol)
println(s"StringIndexer will store labels in output column metadata: " +
  s"${Attribute.fromStructField(inputColSchema).toString}\n")

val converter = new IndexToString()
  .setInputCol("categoryIndex")
  .setOutputCol("originalCategory")

val converted = converter.transform(indexed)

println(s"Transformed indexed column '${converter.getInputCol}' back to original string " +
  s"column '${converter.getOutputCol}' using labels in metadata")
converted.select("id", "categoryIndex", "originalCategory").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/IndexToStringExample.scala” in the Spark repo.
OneHotEncoder
One-hot encoding maps a categorical feature, represented as a label index, to a binary vector with at most a single one-value indicating the presence of a specific feature value from among the set of all feature values. This encoding allows algorithms which expect continuous features, such as Logistic Regression, to use categorical features. For string type input data, it is common to encode categorical features using StringIndexer first.
OneHotEncoder can transform multiple columns, returning an one-hot-encoded output vector column for each input column. It is common to merge these vectors into a single feature vector using VectorAssembler.
OneHotEncoder supports the handleInvalid parameter to choose how to handle invalid input during transforming data. Available options include ‘keep’ (any invalid inputs are assigned to an extra categorical index) and ‘error’ (throw an error).
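The handleInvalid option, together with the dropLast behaviour mentioned under FeatureHasher, is not shown in the example below, so here is a minimal sketch (placeholder data and column names, assuming an active SparkSession named spark):
import org.apache.spark.ml.feature.OneHotEncoder

// Sketch: keep the last category instead of dropping it, and route invalid
// (unseen) indices to an extra category rather than throwing an error.
val indexDF = spark.createDataFrame(Seq(Tuple1(0.0), Tuple1(1.0), Tuple1(2.0))).toDF("categoryIndex")

val tolerantEncoder = new OneHotEncoder()
  .setInputCols(Array("categoryIndex"))
  .setOutputCols(Array("categoryVec"))
  .setDropLast(false)
  .setHandleInvalid("keep")
tolerantEncoder.fit(indexDF).transform(indexDF).show()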
Examples
Refer to the OneHotEncoder Scala docs for more details on the API.
import org.apache.spark.ml.feature.OneHotEncoder

val df = spark.createDataFrame(Seq(
  (0.0, 1.0),
  (1.0, 0.0),
  (2.0, 1.0),
  (0.0, 2.0),
  (0.0, 1.0),
  (2.0, 0.0)
)).toDF("categoryIndex1", "categoryIndex2")

val encoder = new OneHotEncoder()
  .setInputCols(Array("categoryIndex1", "categoryIndex2"))
  .setOutputCols(Array("categoryVec1", "categoryVec2"))
val model = encoder.fit(df)

val encoded = model.transform(df)
encoded.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/OneHotEncoderExample.scala” in the Spark repo.
VectorIndexer
VectorIndexer helps index categorical features in datasets of Vectors. It can both automatically decide which features are categorical and convert original values to category indices. Specifically, it does the following:

1. Take an input column of type Vector and a parameter maxCategories.
2. Decide which features should be categorical based on the number of distinct values, where features with at most maxCategories are declared categorical.
3. Compute 0-based category indices for each categorical feature.
4. Index categorical features and transform original feature values to indices.

Indexing categorical features allows algorithms such as Decision Trees and Tree Ensembles to treat categorical features appropriately, improving performance.
Examples
In the example below, we read in a dataset of labeled points and then use VectorIndexer to decide which features should be treated as categorical. We transform the categorical feature values to their indices. This transformed data could then be passed to algorithms such as DecisionTreeRegressor that handle categorical features.
Refer to the VectorIndexer Scala docs for more details on the API.
import org.apache.spark.ml.feature.VectorIndexer

val data = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val indexer = new VectorIndexer()
  .setInputCol("features")
  .setOutputCol("indexed")
  .setMaxCategories(10)

val indexerModel = indexer.fit(data)

val categoricalFeatures: Set[Int] = indexerModel.categoryMaps.keys.toSet
println(s"Chose ${categoricalFeatures.size} " +
  s"categorical features: ${categoricalFeatures.mkString(", ")}")

// Create new column "indexed" with categorical values transformed to indices
val indexedData = indexerModel.transform(data)
indexedData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorIndexerExample.scala” in the Spark repo.
Interaction
Interaction is a Transformer which takes vector or double-valued columns, and generates a single vector column that contains the product of all combinations of one value from each input column.
For example, if you have 2 vector type columns each of which has 3 dimensions as input columns, then you’ll get a 9-dimensional vector as the output column.
Examples
Assume that we have the following DataFrame with the columns “id1”, “vec1”, and “vec2”:

id1 | vec1           | vec2
----|----------------|---------------
 1  | [1.0,2.0,3.0]  | [8.0,4.0,5.0]
 2  | [4.0,3.0,8.0]  | [7.0,9.0,8.0]
 3  | [6.0,1.0,9.0]  | [2.0,3.0,6.0]
 4  | [10.0,8.0,6.0] | [9.0,4.0,5.0]
 5  | [9.0,2.0,7.0]  | [10.0,7.0,3.0]
 6  | [1.0,1.0,4.0]  | [2.0,8.0,4.0]

Applying Interaction with those input columns, then interactedCol as the output column contains:

id1 | vec1           | vec2           | interactedCol
----|----------------|----------------|--------------------------------------------------------
 1  | [1.0,2.0,3.0]  | [8.0,4.0,5.0]  | [8.0,4.0,5.0,16.0,8.0,10.0,24.0,12.0,15.0]
 2  | [4.0,3.0,8.0]  | [7.0,9.0,8.0]  | [56.0,72.0,64.0,42.0,54.0,48.0,112.0,144.0,128.0]
 3  | [6.0,1.0,9.0]  | [2.0,3.0,6.0]  | [36.0,54.0,108.0,6.0,9.0,18.0,54.0,81.0,162.0]
 4  | [10.0,8.0,6.0] | [9.0,4.0,5.0]  | [360.0,160.0,200.0,288.0,128.0,160.0,216.0,96.0,120.0]
 5  | [9.0,2.0,7.0]  | [10.0,7.0,3.0] | [450.0,315.0,135.0,100.0,70.0,30.0,350.0,245.0,105.0]
 6  | [1.0,1.0,4.0]  | [2.0,8.0,4.0]  | [12.0,48.0,24.0,12.0,48.0,24.0,48.0,192.0,96.0]

Refer to the Interaction Scala docs for more details on the API.
import org.apache.spark.ml.feature.Interaction
import org.apache.spark.ml.feature.VectorAssembler

val df = spark.createDataFrame(Seq(
  (1, 1, 2, 3, 8, 4, 5),
  (2, 4, 3, 8, 7, 9, 8),
  (3, 6, 1, 9, 2, 3, 6),
  (4, 10, 8, 6, 9, 4, 5),
  (5, 9, 2, 7, 10, 7, 3),
  (6, 1, 1, 4, 2, 8, 4)
)).toDF("id1", "id2", "id3", "id4", "id5", "id6", "id7")

val assembler1 = new VectorAssembler().
  setInputCols(Array("id2", "id3", "id4")).
  setOutputCol("vec1")

val assembled1 = assembler1.transform(df)

val assembler2 = new VectorAssembler().
  setInputCols(Array("id5", "id6", "id7")).
  setOutputCol("vec2")

val assembled2 = assembler2.transform(assembled1).select("id1", "vec1", "vec2")

val interaction = new Interaction()
  .setInputCols(Array("id1", "vec1", "vec2"))
  .setOutputCol("interactedCol")

val interacted = interaction.transform(assembled2)

interacted.show(truncate = false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/InteractionExample.scala” in the Spark repo.
Normalizer
Normalizer is a Transformer which transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which specifies the p-norm used for normalization (p = 2 by default). This normalization can help standardize your input data and improve the behavior of learning algorithms.
Examples
The following example demonstrates how to load a dataset in libsvm format and then normalize each row to have unit L^1 norm and unit L^∞ norm.
Refer to the Normalizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Normalizer
import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.5, -1.0)),
  (1, Vectors.dense(2.0, 1.0, 1.0)),
  (2, Vectors.dense(4.0, 10.0, 2.0))
)).toDF("id", "features")

// Normalize each Vector using L^1 norm.
val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(1.0)

val l1NormData = normalizer.transform(dataFrame)
println("Normalized using L^1 norm")
l1NormData.show()

// Normalize each Vector using L^inf norm.
val lInfNormData = normalizer.transform(dataFrame, normalizer.p -> Double.PositiveInfinity)
println("Normalized using L^inf norm")
lInfNormData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/NormalizerExample.scala” in the Spark repo.
StandardScaler
StandardScaler transforms a dataset of Vector rows, normalizing each feature to have unit standard deviation and/or zero mean. It takes parameters:
• withStd: True by default. Scales the data to unit standard deviation.
• withMean: False by default. Centers the data with mean before scaling. It will build a dense output, so take care when applying to sparse input.
StandardScaler is an Estimator which can be fit on a dataset to produce a StandardScalerModel; this amounts to computing summary statistics. The model can then transform a Vector column in a dataset to have unit standard deviation and/or zero mean features.
Note that if the standard deviation of a feature is zero, it will return default 0.0 value in the Vector for that feature.
Examples
The following example demonstrates how to load a dataset in libsvm format and then normalize each feature to have unit standard deviation.
Refer to the StandardScaler Scala docs for more details on the API.
import org.apache.spark.ml.feature.StandardScaler

val dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val scaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)

// Compute summary statistics by fitting the StandardScaler.
val scalerModel = scaler.fit(dataFrame)

// Normalize each feature to have unit standard deviation.
val scaledData = scalerModel.transform(dataFrame)
scaledData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/StandardScalerExample.scala” in the Spark repo.
RobustScaler
RobustScaler transforms a dataset of Vector rows, removing the median and scaling the data according to a specific quantile range (by default the IQR: Interquartile Range, quantile range between the 1st quartile and the 3rd quartile). Its behavior is quite similar to StandardScaler, however the median and the quantile range are used instead of mean and standard deviation, which make it robust to outliers. It takes parameters:
• lower: 0.25 by default. Lower quantile to calculate quantile range, shared by all features.
• upper: 0.75 by default. Upper quantile to calculate quantile range, shared by all features.
• withScaling: True by default. Scales the data to quantile range.
• withCentering: False by default. Centers the data with median before scaling. It will build a dense output, so take care when applying to sparse input.
RobustScaler is an Estimator which can be fit on a dataset to produce a RobustScalerModel; this amounts to computing quantile statistics. The model can then transform a Vector column in a dataset to have unit quantile range and/or zero median features.
Note that if the quantile range of a feature is zero, it will return default 0.0 value in the Vector for that feature.
Examples
The following example demonstrates how to load a dataset in libsvm format and then normalize each feature to have unit quantile range.
Refer to the RobustScaler Scala docs for more details on the API.
import org.apache.spark.ml.feature.RobustScaler

val dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val scaler = new RobustScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithScaling(true)
  .setWithCentering(false)
  .setLower(0.25)
  .setUpper(0.75)

// Compute summary statistics by fitting the RobustScaler.
val scalerModel = scaler.fit(dataFrame)

// Transform each feature to have unit quantile range.
val scaledData = scalerModel.transform(dataFrame)
scaledData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/RobustScalerExample.scala” in the Spark repo.
MinMaxScaler
MinMaxScaler transforms a dataset of Vector rows, rescaling each feature to a specific range (often [0, 1]). It takes parameters:
• min: 0.0 by default. Lower bound after transformation, shared by all features.
• max: 1.0 by default. Upper bound after transformation, shared by all features.
MinMaxScaler computes summary statistics on a data set and produces a MinMaxScalerModel. The model can then transform each feature individually such that it is in the given range.
The rescaled value for a feature E is calculated as,
Rescaled(e_i) = ((e_i - E_min) / (E_max - E_min)) * (max - min) + min
For the case E_max == E_min, Rescaled(e_i) = 0.5 * (max + min).
Note that since zero values will probably be transformed to non-zero values, output of the transformer will be DenseVector even for sparse input.
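As a quick worked example with the default range [min, max] = [0.0, 1.0]: for a feature column containing 1.0, 2.0 and 3.0 (the first feature of the example below), E_min = 1.0 and E_max = 3.0, so 2.0 is rescaled to (2.0 - 1.0) / (3.0 - 1.0) * (1.0 - 0.0) + 0.0 = 0.5, while 1.0 and 3.0 map to 0.0 and 1.0 respectively.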
Examples
The following example demonstrates how to load a dataset in libsvm format and then rescale each feature to [0, 1].
Refer to the MinMaxScaler Scala docs and the MinMaxScalerModel Scala docs for more details on the API.
import org.apache.spark.ml.feature.MinMaxScaler
import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.1, -1.0)),
  (1, Vectors.dense(2.0, 1.1, 1.0)),
  (2, Vectors.dense(3.0, 10.1, 3.0))
)).toDF("id", "features")

val scaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Compute summary statistics and generate MinMaxScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [min, max].
val scaledData = scalerModel.transform(dataFrame)
println(s"Features scaled to range: [${scaler.getMin}, ${scaler.getMax}]")
scaledData.select("features", "scaledFeatures").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/MinMaxScalerExample.scala” in the Spark repo.
MaxAbsScaler
MaxAbsScaler transforms a dataset of Vector rows, rescaling each feature to range [-1, 1] by dividing through the maximum absolute value in each feature. It does not shift/center the data, and thus does not destroy any sparsity.
MaxAbsScaler computes summary statistics on a data set and produces a MaxAbsScalerModel. The model can then transform each feature individually to range [-1, 1].
Examples
The following example demonstrates how to load a dataset in libsvm format and then rescale each feature to [-1, 1].
Refer to the MaxAbsScaler Scala docs and the MaxAbsScalerModel Scala docs for more details on the API.
import org.apache.spark.ml.feature.MaxAbsScaler
import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.1, -8.0)),
  (1, Vectors.dense(2.0, 1.0, -4.0)),
  (2, Vectors.dense(4.0, 10.0, 8.0))
)).toDF("id", "features")

val scaler = new MaxAbsScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Compute summary statistics and generate MaxAbsScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [-1, 1]
val scaledData = scalerModel.transform(dataFrame)
scaledData.select("features", "scaledFeatures").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala” in the Spark repo.
Bucketizer
Bucketizer transforms a column of continuous features to a column of feature buckets, where the buckets are specified by users. It takes a parameter:
• splits: Parameter for mapping continuous features into buckets. With n+1 splits, there are n buckets. A bucket defined by splits x,y holds values in the range [x,y) except the last bucket, which also includes y. Splits should be strictly increasing. Values at -inf, inf must be explicitly provided to cover all Double values; otherwise, values outside the splits specified will be treated as errors. Two examples of splits are Array(Double.NegativeInfinity, 0.0, 1.0, Double.PositiveInfinity) and Array(0.0, 1.0, 2.0).
Note that if you have no idea of the upper and lower bounds of the targeted column, you should add Double.NegativeInfinity and Double.PositiveInfinity as the bounds of your splits to prevent a potential out of Bucketizer bounds exception.
Note also that the splits that you provided have to be in strictly increasing order, i.e. s0 < s1 < s2 < … < sn.
More details can be found in the API docs for Bucketizer.
Examples
The following example demonstrates how to bucketize a column of Doubles into another index-wised column.
Refer to the Bucketizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Bucketizer

val splits = Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity)

val data = Array(-999.9, -0.5, -0.3, 0.0, 0.2, 999.9)
val dataFrame = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val bucketizer = new Bucketizer()
  .setInputCol("features")
  .setOutputCol("bucketedFeatures")
  .setSplits(splits)

// Transform original data into its bucket index.
val bucketedData = bucketizer.transform(dataFrame)

println(s"Bucketizer output with ${bucketizer.getSplits.length-1} buckets")
bucketedData.show()

val splitsArray = Array(
  Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity),
  Array(Double.NegativeInfinity, -0.3, 0.0, 0.3, Double.PositiveInfinity))

val data2 = Array(
  (-999.9, -999.9),
  (-0.5, -0.2),
  (-0.3, -0.1),
  (0.0, 0.0),
  (0.2, 0.4),
  (999.9, 999.9))
val dataFrame2 = spark.createDataFrame(data2).toDF("features1", "features2")

val bucketizer2 = new Bucketizer()
  .setInputCols(Array("features1", "features2"))
  .setOutputCols(Array("bucketedFeatures1", "bucketedFeatures2"))
  .setSplitsArray(splitsArray)

// Transform original data into its bucket index.
val bucketedData2 = bucketizer2.transform(dataFrame2)

println(s"Bucketizer output with [" +
  s"${bucketizer2.getSplitsArray(0).length-1}, " +
  s"${bucketizer2.getSplitsArray(1).length-1}] buckets for each input column")
bucketedData2.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/BucketizerExample.scala” in the Spark repo.
ElementwiseProduct
ElementwiseProduct multiplies each input vector by a provided “weight” vector, using element-wise multiplication. In other words, it scales each column of the dataset by a scalar multiplier. This represents the Hadamard product between the input vector, v and transforming vector, w, to yield a result vector.
(v_1, ..., v_N) ∘ (w_1, ..., w_N) = (v_1 w_1, ..., v_N w_N)
Examples
This example below demonstrates how to transform vectors using a transforming vector value.
Refer to the ElementwiseProduct Scala docs for more details on the API.
import org.apache.spark.ml.feature.ElementwiseProduct
import org.apache.spark.ml.linalg.Vectors

// Create some vector data; also works for sparse vectors
val dataFrame = spark.createDataFrame(Seq(
  ("a", Vectors.dense(1.0, 2.0, 3.0)),
  ("b", Vectors.dense(4.0, 5.0, 6.0)))).toDF("id", "vector")

val transformingVector = Vectors.dense(0.0, 1.0, 2.0)
val transformer = new ElementwiseProduct()
  .setScalingVec(transformingVector)
  .setInputCol("vector")
  .setOutputCol("transformedVector")

// Batch transform the vectors to create new column:
transformer.transform(dataFrame).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/ElementwiseProductExample.scala” in the Spark repo.
SQLTransformer
SQLTransformer implements the transformations which are defined by SQL statement. Currently, we only support SQL syntax like "SELECT ... FROM __THIS__ ..." where "__THIS__" represents the underlying table of the input dataset. The select clause specifies the fields, constants, and expressions to display in the output, and can be any select clause that Spark SQL supports. Users can also use Spark SQL built-in functions and UDFs to operate on these selected columns. For example, SQLTransformer supports statements like:
• SELECT a, a + b AS a_b FROM __THIS__
• SELECT a, SQRT(b) AS b_sqrt FROM __THIS__ where a > 5
• SELECT a, b, SUM(c) AS c_sum FROM __THIS__ GROUP BY a, b
Examples
Assume that we have the following DataFrame with columns id, v1 and v2:

 id | v1  | v2
----|-----|----
 0  | 1.0 | 3.0
 2  | 2.0 | 5.0

This is the output of the SQLTransformer with statement "SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__":

 id | v1  | v2  | v3  | v4
----|-----|-----|-----|-----
 0  | 1.0 | 3.0 | 4.0 | 3.0
 2  | 2.0 | 5.0 | 7.0 | 10.0

Refer to the SQLTransformer Scala docs for more details on the API.
import org.apache.spark.ml.feature.SQLTransformer

val df = spark.createDataFrame(
  Seq((0, 1.0, 3.0), (2, 2.0, 5.0))).toDF("id", "v1", "v2")

val sqlTrans = new SQLTransformer().setStatement(
  "SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__")

sqlTrans.transform(df).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/SQLTransformerExample.scala” in the Spark repo.
VectorAssembler
VectorAssembler is a transformer that combines a given list of columns into a single vector column. It is useful for combining raw features and features generated by different feature transformers into a single feature vector, in order to train ML models like logistic regression and decision trees. VectorAssembler accepts the following input column types: all numeric types, boolean type, and vector type. In each row, the values of the input columns will be concatenated into a vector in the specified order.
Examples
Assume that we have a DataFrame with the columns id, hour, mobile, userFeatures, and clicked:

 id | hour | mobile | userFeatures     | clicked
----|------|--------|------------------|--------
 0  | 18   | 1.0    | [0.0, 10.0, 0.5] | 1.0

userFeatures is a vector column that contains three user features. We want to combine hour, mobile, and userFeatures into a single feature vector called features and use it to predict clicked or not. If we set VectorAssembler’s input columns to hour, mobile, and userFeatures and output column to features, after transformation we should get the following DataFrame:

 id | hour | mobile | userFeatures     | clicked | features
----|------|--------|------------------|---------|----------------------------
 0  | 18   | 1.0    | [0.0, 10.0, 0.5] | 1.0     | [18.0, 1.0, 0.0, 10.0, 0.5]

Refer to the VectorAssembler Scala docs for more details on the API.
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
  Seq((0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0))
).toDF("id", "hour", "mobile", "userFeatures", "clicked")

val assembler = new VectorAssembler()
  .setInputCols(Array("hour", "mobile", "userFeatures"))
  .setOutputCol("features")

val output = assembler.transform(dataset)
println("Assembled columns 'hour', 'mobile', 'userFeatures' to vector column 'features'")
output.select("features", "clicked").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorAssemblerExample.scala” in the Spark repo.
VectorSizeHint
It can sometimes be useful to explicitly specify the size of the vectors for a column of VectorType. For example, VectorAssembler uses size information from its input columns to produce size information and metadata for its output column. While in some cases this information can be obtained by inspecting the contents of the column, in a streaming dataframe the contents are not available until the stream is started. VectorSizeHint allows a user to explicitly specify the vector size for a column so that VectorAssembler, or other transformers that might need to know vector size, can use that column as an input.
To use VectorSizeHint a user must set the inputCol and size parameters. Applying this transformer to a dataframe produces a new dataframe with updated metadata for inputCol specifying the vector size. Downstream operations on the resulting dataframe can get this size using the metadata.
VectorSizeHint can also take an optional handleInvalid parameter which controls its behaviour when the vector column contains nulls or vectors of the wrong size. By default handleInvalid is set to “error”, indicating an exception should be thrown. This parameter can also be set to “skip”, indicating that rows containing invalid values should be filtered out from the resulting dataframe, or “optimistic”, indicating that the column should not be checked for invalid values and all rows should be kept. Note that the use of “optimistic” can cause the resulting dataframe to be in an inconsistent state, meaning the metadata for the column VectorSizeHint was applied to does not match the contents of that column. Users should take care to avoid this kind of inconsistent state.
Refer to the VectorSizeHint Scala docs for more details on the API.
import org.apache.spark.ml.feature.{VectorAssembler, VectorSizeHint}
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
  Seq(
    (0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0),
    (0, 18, 1.0, Vectors.dense(0.0, 10.0), 0.0))
).toDF("id", "hour", "mobile", "userFeatures", "clicked")

val sizeHint = new VectorSizeHint()
  .setInputCol("userFeatures")
  .setHandleInvalid("skip")
  .setSize(3)

val datasetWithSize = sizeHint.transform(dataset)
println("Rows where 'userFeatures' is not the right size are filtered out")
datasetWithSize.show(false)

val assembler = new VectorAssembler()
  .setInputCols(Array("hour", "mobile", "userFeatures"))
  .setOutputCol("features")

// This dataframe can be used by downstream transformers as before
val output = assembler.transform(datasetWithSize)
println("Assembled columns 'hour', 'mobile', 'userFeatures' to vector column 'features'")
output.select("features", "clicked").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorSizeHintExample.scala” in the Spark repo.
QuantileDiscretizer
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins is set by the numBuckets parameter. It is possible that the number of buckets used will be smaller than this value, for example, if there are too few distinct values of the input to create enough distinct quantiles.
NaN values: NaN values will be removed from the column during QuantileDiscretizer fitting. This will produce a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket, for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].
Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. When set to zero, exact quantiles are calculated (Note: Computing exact quantiles is an expensive operation). The lower and upper bin bounds will be -Infinity and +Infinity covering all real values.
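A short sketch of the handleInvalid and relativeError options described above (not part of the original example; the small DataFrame is a placeholder and assumes an active SparkSession named spark):
import org.apache.spark.ml.feature.QuantileDiscretizer

// Sketch: NaN rows go into a special extra bucket; a non-zero relative error trades
// quantile accuracy for speed (0.0 would compute exact, but expensive, quantiles).
val hoursDF = spark.createDataFrame(Seq((0, 18.0), (1, Double.NaN), (2, 8.0))).toDF("id", "hour")

val nanTolerantDiscretizer = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(2)
  .setHandleInvalid("keep")
  .setRelativeError(0.01)
nanTolerantDiscretizer.fit(hoursDF).transform(hoursDF).show()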
Examples
Assume that we have a DataFrame with the columns id, hour:

 id | hour
----|------
 0  | 18.0
 1  | 19.0
 2  | 8.0
 3  | 5.0
 4  | 2.2

hour is a continuous feature with Double type. We want to turn the continuous feature into a categorical one. Given numBuckets = 3, we should get the following DataFrame:

 id | hour | result
----|------|--------
 0  | 18.0 | 2.0
 1  | 19.0 | 2.0
 2  | 8.0  | 1.0
 3  | 5.0  | 1.0
 4  | 2.2  | 0.0

? Scala
? Java
? Python
Refer to the QuantileDiscretizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.QuantileDiscretizer

val data = Array((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
val df = spark.createDataFrame(data).toDF("id", "hour")

val discretizer = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

val result = discretizer.fit(df).transform(df)
result.show(false)
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/QuantileDiscretizerExample.scala" in the Spark repo.
Imputer
The Imputer estimator completes missing values in a dataset, using either the mean or the median of the columns in which the missing values are located. The input columns should be of numeric type. Currently Imputer does not support categorical features and may create incorrect values for columns containing categorical features. Imputer can treat values other than 'NaN' as missing via .setMissingValue(custom_value); for example, .setMissingValue(0) will treat all occurrences of 0 as missing and impute them.
Note all null values in the input columns are treated as missing, and so are also imputed.
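As a hedged sketch (not part of the example below, reusing its column names), the median strategy and a custom missing-value marker described above could be configured like this:
import org.apache.spark.ml.feature.Imputer

// Sketch only: impute with the median instead of the default mean, and treat 0
// (rather than Double.NaN) as the marker for missing values.
val medianImputer = new Imputer()
  .setInputCols(Array("a", "b"))
  .setOutputCols(Array("out_a", "out_b"))
  .setStrategy("median")
  .setMissingValue(0)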
Examples
Suppose that we have a DataFrame with the columns a and b:
a | b
------------|-----------
1.0 | Double.NaN
2.0 | Double.NaN
Double.NaN | 3.0
4.0 | 4.0
5.0 | 5.0
In this example, Imputer will replace all occurrences of Double.NaN (the default for the missing value) with the mean (the default imputation strategy) computed from the other values in the corresponding columns. In this example, the surrogate values for columns a and b are 3.0 and 4.0 respectively. After transformation, the missing values in the output columns will be replaced by the surrogate value for the relevant column.
a | b | out_a | out_b
------------|------------|-------|-------
1.0 | Double.NaN | 1.0 | 4.0
2.0 | Double.NaN | 2.0 | 4.0
Double.NaN | 3.0 | 3.0 | 3.0
4.0 | 4.0 | 4.0 | 4.0
5.0 | 5.0 | 5.0 | 5.0
? Scala
? Java
? Python
Refer to the Imputer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Imputer

val df = spark.createDataFrame(Seq(
  (1.0, Double.NaN),
  (2.0, Double.NaN),
  (Double.NaN, 3.0),
  (4.0, 4.0),
  (5.0, 5.0)
)).toDF("a", "b")

val imputer = new Imputer()
  .setInputCols(Array("a", "b"))
  .setOutputCols(Array("out_a", "out_b"))

val model = imputer.fit(df)
model.transform(df).show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/ImputerExample.scala" in the Spark repo.
Feature Selectors
VectorSlicer
VectorSlicer is a transformer that takes a feature vector and outputs a new feature vector with a sub-array of the original features. It is useful for extracting features from a vector column.
VectorSlicer accepts a vector column with specified indices, then outputs a new vector column whose values are selected via those indices. There are two types of indices,

1. Integer indices that represent the indices into the vector, setIndices().
2. String indices that represent the names of features into the vector, setNames(). This requires the vector column to have an AttributeGroup since the implementation matches on the name field of an Attribute.

Specification by integer and string are both acceptable. Moreover, you can use integer indices and string names simultaneously. At least one feature must be selected. Duplicate features are not allowed, so there can be no overlap between selected indices and names. Note that if names of features are selected, an exception will be thrown if empty input attributes are encountered.
The output vector will order features with the selected indices first (in the order given), followed by the selected names (in the order given).
Examples
Suppose that we have a DataFrame with the column userFeatures:

userFeatures
[0.0, 10.0, 0.5]
userFeatures is a vector column that contains three user features. Assume that the first column of userFeatures is all zeros, so we want to remove it and select only the last two columns. The VectorSlicer selects the last two elements with setIndices(1, 2) and then produces a new vector column named features:

userFeatures     | features
-----------------|-------------
[0.0, 10.0, 0.5] | [10.0, 0.5]

Suppose also that we have potential input attributes for the userFeatures, i.e. [“f1”, “f2”, “f3”], then we can use setNames(“f2”, “f3”) to select them.

userFeatures       | features
-------------------|--------------
[0.0, 10.0, 0.5]   | [10.0, 0.5]
["f1", "f2", "f3"] | ["f2", "f3"]

? Scala
? Java
? Python
Refer to the VectorSlicer Scala docs for more details on the API.
import java.util.Arrays

import org.apache.spark.ml.attribute.{Attribute, AttributeGroup, NumericAttribute}
import org.apache.spark.ml.feature.VectorSlicer
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.StructType

val data = Arrays.asList(
  Row(Vectors.sparse(3, Seq((0, -2.0), (1, 2.3)))),
  Row(Vectors.dense(-2.0, 2.3, 0.0))
)

val defaultAttr = NumericAttribute.defaultAttr
val attrs = Array("f1", "f2", "f3").map(defaultAttr.withName)
val attrGroup = new AttributeGroup("userFeatures", attrs.asInstanceOf[Array[Attribute]])

val dataset = spark.createDataFrame(data, StructType(Array(attrGroup.toStructField())))

val slicer = new VectorSlicer().setInputCol("userFeatures").setOutputCol("features")

slicer.setIndices(Array(1)).setNames(Array("f3"))
// or slicer.setIndices(Array(1, 2)), or slicer.setNames(Array("f2", "f3"))

val output = slicer.transform(dataset)
output.show(false)
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/VectorSlicerExample.scala" in the Spark repo.
RFormula
RFormula selects columns specified by an R model formula. Currently we support a limited subset of the R operators, including ‘~’, ‘.’, ‘:’, ‘+’, and ‘-‘. The basic operators are:
? ~ separate target and terms
? + concat terms, “+ 0” means removing intercept
? - remove a term, “- 1” means removing intercept
? : interaction (multiplication for numeric values, or binarized categorical values)
? . all columns except target
Suppose a and b are double columns, we use the following simple examples to illustrate the effect of RFormula:
? y ~ a + b means model y ~ w0 + w1 * a + w2 * b where w0 is the intercept and w1, w2 are coefficients.
? y ~ a + b + a:b - 1 means model y ~ w1 * a + w2 * b + w3 * a * b where w1, w2, w3 are coefficients.
RFormula produces a vector column of features and a double or string column of label. Like when formulas are used in R for linear regression, numeric columns will be cast to doubles. As to string input columns, they will first be transformed with StringIndexer using ordering determined by stringOrderType, and the last category after ordering is dropped, then the doubles will be one-hot encoded.
Suppose we have a string feature column containing the values {'b', 'a', 'b', 'a', 'c', 'b'}; we can set stringOrderType to control the encoding:

stringOrderType | Category mapped to 0 by StringIndexer | Category dropped by RFormula
----------------|---------------------------------------|-----------------------------------
'frequencyDesc' | most frequent category ('b')          | least frequent category ('c')
'frequencyAsc'  | least frequent category ('c')         | most frequent category ('b')
'alphabetDesc'  | last alphabetical category ('c')      | first alphabetical category ('a')
'alphabetAsc'   | first alphabetical category ('a')     | last alphabetical category ('c')

If the label column is of type string, it will be first transformed to double with StringIndexer using frequencyDesc ordering. If the label column does not exist in the DataFrame, the output label column will be created from the specified response variable in the formula.
Note: The ordering option stringOrderType is NOT used for the label column. When the label column is indexed, it uses the default descending frequency ordering in StringIndexer.
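The example below uses the default ordering; as a small sketch (not part of the original example), stringOrderType could be set explicitly to change which category is dropped:
import org.apache.spark.ml.feature.RFormula

// Sketch only: index string features alphabetically (descending) before
// one-hot encoding, so 'a' becomes the dropped reference category.
val formulaAlpha = new RFormula()
  .setFormula("clicked ~ country + hour")
  .setStringOrderType("alphabetDesc")
  .setFeaturesCol("features")
  .setLabelCol("label")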
Examples
Assume that we have a DataFrame with the columns id, country, hour, and clicked:

 id | country | hour | clicked
----|---------|------|---------
 7  | "US"    | 18   | 1.0
 8  | "CA"    | 12   | 0.0
 9  | "NZ"    | 15   | 0.0

If we use RFormula with a formula string of clicked ~ country + hour, which indicates that we want to predict clicked based on country and hour, after transformation we should get the following DataFrame:

 id | country | hour | clicked | features         | label
----|---------|------|---------|------------------|-------
 7  | "US"    | 18   | 1.0     | [0.0, 0.0, 18.0] | 1.0
 8  | "CA"    | 12   | 0.0     | [0.0, 1.0, 12.0] | 0.0
 9  | "NZ"    | 15   | 0.0     | [1.0, 0.0, 15.0] | 0.0

? Scala
? Java
? Python
Refer to the RFormula Scala docs for more details on the API.
import org.apache.spark.ml.feature.RFormula

val dataset = spark.createDataFrame(Seq(
  (7, "US", 18, 1.0),
  (8, "CA", 12, 0.0),
  (9, "NZ", 15, 0.0)
)).toDF("id", "country", "hour", "clicked")

val formula = new RFormula()
  .setFormula("clicked ~ country + hour")
  .setFeaturesCol("features")
  .setLabelCol("label")

val output = formula.fit(dataset).transform(dataset)
output.select("features", "label").show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/RFormulaExample.scala" in the Spark repo.
ChiSqSelector
ChiSqSelector stands for Chi-Squared feature selection. It operates on labeled data with categorical features. ChiSqSelector uses the Chi-Squared test of independence to decide which features to choose. It supports five selection methods: numTopFeatures, percentile, fpr, fdr, fwe:
? numTopFeatures chooses a fixed number of top features according to a chi-squared test. This is akin to yielding the features with the most predictive power.
? percentile is similar to numTopFeatures but chooses a fraction of all features instead of a fixed number.
? fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.
? fdr uses the Benjamini-Hochberg procedure to choose all features whose false discovery rate is below a threshold.
? fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.
By default, the selection method is numTopFeatures, with the default number of top features set to 50. The user can choose a selection method using setSelectorType, as sketched below.
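As a hedged sketch (not part of the example further below, reusing its column names), switching to the fpr method could look like this:
import org.apache.spark.ml.feature.ChiSqSelector

// Sketch only: keep every feature whose chi-squared p-value is below 0.05,
// instead of keeping a fixed number of top features.
val fprSelector = new ChiSqSelector()
  .setSelectorType("fpr")
  .setFpr(0.05)
  .setFeaturesCol("features")
  .setLabelCol("clicked")
  .setOutputCol("selectedFeatures")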
Examples
Assume that we have a DataFrame with the columns id, features, and clicked, which is used as our target to be predicted:

 id | features              | clicked
----|-----------------------|---------
 7  | [0.0, 0.0, 18.0, 1.0] | 1.0
 8  | [0.0, 1.0, 12.0, 0.0] | 0.0
 9  | [1.0, 0.0, 15.0, 0.1] | 0.0

If we use ChiSqSelector with numTopFeatures = 1, then according to our label clicked the last column in our features is chosen as the most useful feature:

 id | features              | clicked | selectedFeatures
----|-----------------------|---------|------------------
 7  | [0.0, 0.0, 18.0, 1.0] | 1.0     | [1.0]
 8  | [0.0, 1.0, 12.0, 0.0] | 0.0     | [0.0]
 9  | [1.0, 0.0, 15.0, 0.1] | 0.0     | [0.1]

? Scala
? Java
? Python
Refer to the ChiSqSelector Scala docs for more details on the API.
import org.apache.spark.ml.feature.ChiSqSelector
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  (7, Vectors.dense(0.0, 0.0, 18.0, 1.0), 1.0),
  (8, Vectors.dense(0.0, 1.0, 12.0, 0.0), 0.0),
  (9, Vectors.dense(1.0, 0.0, 15.0, 0.1), 0.0)
)

val df = spark.createDataset(data).toDF("id", "features", "clicked")

val selector = new ChiSqSelector()
  .setNumTopFeatures(1)
  .setFeaturesCol("features")
  .setLabelCol("clicked")
  .setOutputCol("selectedFeatures")

val result = selector.fit(df).transform(df)

println(s"ChiSqSelector output with top ${selector.getNumTopFeatures} features selected")
result.show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/ChiSqSelectorExample.scala" in the Spark repo.
UnivariateFeatureSelector
UnivariateFeatureSelector operates on categorical/continuous labels with categorical/continuous features. Users can set featureType and labelType, and Spark will pick the score function to use based on the specified featureType and labelType.

featureType | labelType   | score function
------------|-------------|------------------------
categorical | categorical | chi-squared (chi2)
continuous  | categorical | ANOVATest (f_classif)
continuous  | continuous  | F-value (f_regression)

It supports five selection modes: numTopFeatures, percentile, fpr, fdr, fwe:
? numTopFeatures chooses a fixed number of top features.
? percentile is similar to numTopFeatures but chooses a fraction of all features instead of a fixed number.
? fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.
? fdr uses the Benjamini-Hochberg procedure to choose all features whose false discovery rate is below a threshold.
? fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.
By default, the selection mode is numTopFeatures, with the default selectionThreshold set to 50.
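As a minimal sketch (not part of the example below, reusing its column names), the percentile mode, where selectionThreshold is interpreted as the fraction of features to keep rather than a count, could be configured like this:
import org.apache.spark.ml.feature.UnivariateFeatureSelector

// Sketch only: keep the top 50% of continuous features scored against a
// categorical label (f_classif).
val percentileSelector = new UnivariateFeatureSelector()
  .setFeatureType("continuous")
  .setLabelType("categorical")
  .setSelectionMode("percentile")
  .setSelectionThreshold(0.5)
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setOutputCol("selectedFeatures")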
Examples
Assume that we have a DataFrame with the columns id, features, and label, which is used as our target to be predicted:

 id | features                       | label
----|--------------------------------|-------
 1  | [1.7, 4.4, 7.6, 5.8, 9.6, 2.3] | 3.0
 2  | [8.8, 7.3, 5.7, 7.3, 2.2, 4.1] | 2.0
 3  | [1.2, 9.5, 2.5, 3.1, 8.7, 2.5] | 3.0
 4  | [3.7, 9.2, 6.1, 4.1, 7.5, 3.8] | 2.0
 5  | [8.9, 5.2, 7.8, 8.3, 5.2, 3.0] | 4.0
 6  | [7.9, 8.5, 9.2, 4.0, 9.4, 2.1] | 4.0

If we set featureType to continuous and labelType to categorical with numTopFeatures = 1, the last column in our features is chosen as the most useful feature:

 id | features                       | label | selectedFeatures
----|--------------------------------|-------|------------------
 1  | [1.7, 4.4, 7.6, 5.8, 9.6, 2.3] | 3.0   | [2.3]
 2  | [8.8, 7.3, 5.7, 7.3, 2.2, 4.1] | 2.0   | [4.1]
 3  | [1.2, 9.5, 2.5, 3.1, 8.7, 2.5] | 3.0   | [2.5]
 4  | [3.7, 9.2, 6.1, 4.1, 7.5, 3.8] | 2.0   | [3.8]
 5  | [8.9, 5.2, 7.8, 8.3, 5.2, 3.0] | 4.0   | [3.0]
 6  | [7.9, 8.5, 9.2, 4.0, 9.4, 2.1] | 4.0   | [2.1]

? Scala
? Java
? Python
Refer to the UnivariateFeatureSelector Scala docs for more details on the API.
import org.apache.spark.ml.feature.UnivariateFeatureSelector
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  (1, Vectors.dense(1.7, 4.4, 7.6, 5.8, 9.6, 2.3), 3.0),
  (2, Vectors.dense(8.8, 7.3, 5.7, 7.3, 2.2, 4.1), 2.0),
  (3, Vectors.dense(1.2, 9.5, 2.5, 3.1, 8.7, 2.5), 3.0),
  (4, Vectors.dense(3.7, 9.2, 6.1, 4.1, 7.5, 3.8), 2.0),
  (5, Vectors.dense(8.9, 5.2, 7.8, 8.3, 5.2, 3.0), 4.0),
  (6, Vectors.dense(7.9, 8.5, 9.2, 4.0, 9.4, 2.1), 4.0)
)

val df = spark.createDataset(data).toDF("id", "features", "label")

val selector = new UnivariateFeatureSelector()
  .setFeatureType("continuous")
  .setLabelType("categorical")
  .setSelectionMode("numTopFeatures")
  .setSelectionThreshold(1)
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setOutputCol("selectedFeatures")

val result = selector.fit(df).transform(df)

println(s"UnivariateFeatureSelector output with top ${selector.getSelectionThreshold}" +
  s" features selected using f_classif")
result.show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/UnivariateFeatureSelectorExample.scala" in the Spark repo.
VarianceThresholdSelector
VarianceThresholdSelector is a selector that removes low-variance features. Features with a variance not greater than the varianceThreshold will be removed. If not set, varianceThreshold defaults to 0, which means only features with variance 0 (i.e. features that have the same value in all samples) will be removed.
Examples
Assume that we have a DataFrame with the columns id and features:

 id | features
----|--------------------------------
 1  | [6.0, 7.0, 0.0, 7.0, 6.0, 0.0]
 2  | [0.0, 9.0, 6.0, 0.0, 5.0, 9.0]
 3  | [0.0, 9.0, 3.0, 0.0, 5.0, 5.0]
 4  | [0.0, 9.0, 8.0, 5.0, 6.0, 4.0]
 5  | [8.0, 9.0, 6.0, 5.0, 4.0, 4.0]
 6  | [8.0, 9.0, 6.0, 0.0, 0.0, 0.0]

The variances of the 6 features are 16.67, 0.67, 8.17, 10.17, 5.07, and 11.47, respectively. If we use VarianceThresholdSelector with varianceThreshold = 8.0, then the features with variance <= 8.0 are removed:

 id | features                       | selectedFeatures
----|--------------------------------|----------------------
 1  | [6.0, 7.0, 0.0, 7.0, 6.0, 0.0] | [6.0, 0.0, 7.0, 0.0]
 2  | [0.0, 9.0, 6.0, 0.0, 5.0, 9.0] | [0.0, 6.0, 0.0, 9.0]
 3  | [0.0, 9.0, 3.0, 0.0, 5.0, 5.0] | [0.0, 3.0, 0.0, 5.0]
 4  | [0.0, 9.0, 8.0, 5.0, 6.0, 4.0] | [0.0, 8.0, 5.0, 4.0]
 5  | [8.0, 9.0, 6.0, 5.0, 4.0, 4.0] | [8.0, 6.0, 5.0, 4.0]
 6  | [8.0, 9.0, 6.0, 0.0, 0.0, 0.0] | [8.0, 6.0, 0.0, 0.0]

? Scala
? Java
? Python
Refer to the VarianceThresholdSelector Scala docs for more details on the API.
import org.apache.spark.ml.feature.VarianceThresholdSelector
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  (1, Vectors.dense(6.0, 7.0, 0.0, 7.0, 6.0, 0.0)),
  (2, Vectors.dense(0.0, 9.0, 6.0, 0.0, 5.0, 9.0)),
  (3, Vectors.dense(0.0, 9.0, 3.0, 0.0, 5.0, 5.0)),
  (4, Vectors.dense(0.0, 9.0, 8.0, 5.0, 6.0, 4.0)),
  (5, Vectors.dense(8.0, 9.0, 6.0, 5.0, 4.0, 4.0)),
  (6, Vectors.dense(8.0, 9.0, 6.0, 0.0, 0.0, 0.0))
)

val df = spark.createDataset(data).toDF("id", "features")

val selector = new VarianceThresholdSelector()
  .setVarianceThreshold(8.0)
  .setFeaturesCol("features")
  .setOutputCol("selectedFeatures")

val result = selector.fit(df).transform(df)

println(s"Output: Features with variance lower than" +
  s" ${selector.getVarianceThreshold} are removed.")
result.show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/VarianceThresholdSelectorExample.scala" in the Spark repo.
Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) is an important class of hashing techniques, which is commonly used in clustering, approximate nearest neighbor search and outlier detection with large datasets.
The general idea of LSH is to use a family of functions (“LSH families”) to hash data points into buckets, so that the data points which are close to each other are in the same buckets with high probability, while data points that are far away from each other are very likely in different buckets. An LSH family is formally defined as follows.
In a metric space (M, d), where M is a set and d is a distance function on M, an LSH family is a family of functions h that satisfy the following properties:
∀ p, q ∈ M:
  d(p, q) ≤ r1 ⇒ Pr(h(p) = h(q)) ≥ p1
  d(p, q) ≥ r2 ⇒ Pr(h(p) = h(q)) ≤ p2
This LSH family is called (r1, r2, p1, p2)-sensitive.
In Spark, different LSH families are implemented in separate classes (e.g., MinHash), and APIs for feature transformation, approximate similarity join and approximate nearest neighbor are provided in each class.
In LSH, we define a false positive as a pair of distant input features (with d(p, q) ≥ r2) which are hashed into the same bucket, and we define a false negative as a pair of nearby features (with d(p, q) ≤ r1) which are hashed into different buckets.
LSH Operations
We describe the major types of operations which LSH can be used for. A fitted LSH model has methods for each of these operations.
Feature Transformation
Feature transformation is the basic functionality to add hashed values as a new column. This can be useful for dimensionality reduction. Users can specify input and output column names by setting inputCol and outputCol.
LSH also supports multiple LSH hash tables. Users can specify the number of hash tables by setting numHashTables. This is also used for OR-amplification in approximate similarity join and approximate nearest neighbor. Increasing the number of hash tables will increase the accuracy but will also increase communication cost and running time.
The type of outputCol is Seq[Vector] where the dimension of the array equals numHashTables, and the dimensions of the vectors are currently set to 1. In future releases, we will implement AND-amplification so that users can specify the dimensions of these vectors.
Approximate Similarity Join
Approximate similarity join takes two datasets and approximately returns pairs of rows in the datasets whose distance is smaller than a user-defined threshold. Approximate similarity join supports both joining two different datasets and self-joining. Self-joining will produce some duplicate pairs.
Approximate similarity join accepts both transformed and untransformed datasets as input. If an untransformed dataset is used, it will be transformed automatically. In this case, the hash signature will be created as outputCol.
In the joined dataset, the origin datasets can be queried in datasetA and datasetB. A distance column will be added to the output dataset to show the true distance between each pair of rows returned.
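As a hedged sketch (assuming a fitted LSH model named model and input DataFrames dfA and dfB as in the examples further below), the hash signatures can be pre-computed once and reused, so that approxSimilarityJoin does not re-hash the inputs:
import org.apache.spark.sql.functions.col

// Sketch only: pre-compute the hash signatures, then join on the transformed datasets.
// The distance column name "EuclideanDistance" matches the Bucketed Random Projection
// example further below.
val transformedA = model.transform(dfA)
val transformedB = model.transform(dfB)
model.approxSimilarityJoin(transformedA, transformedB, 1.5, "EuclideanDistance")
  .select(col("datasetA.id"), col("datasetB.id"), col("EuclideanDistance"))
  .show()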
Approximate Nearest Neighbor Search
Approximate nearest neighbor search takes a dataset (of feature vectors) and a key (a single feature vector), and it approximately returns a specified number of rows in the dataset that are closest to the vector.
Approximate nearest neighbor search accepts both transformed and untransformed datasets as input. If an untransformed dataset is used, it will be transformed automatically. In this case, the hash signature will be created as outputCol.
A distance column will be added to the output dataset to show the true distance between each output row and the searched key.
Note: Approximate nearest neighbor search will return fewer than k rows when there are not enough candidates in the hash bucket.
LSH Algorithms
Bucketed Random Projection for Euclidean Distance
Bucketed Random Projection is an LSH family for Euclidean distance. The Euclidean distance is defined as follows:
d(x, y) = √( Σᵢ (xᵢ − yᵢ)² )
Its LSH family projects feature vectors x onto a random unit vector v and portions the projected results into hash buckets:
h(x) = ⌊ (x · v) / r ⌋
where r is a user-defined bucket length. The bucket length can be used to control the average size of hash buckets (and thus the number of buckets). A larger bucket length (i.e., fewer buckets) increases the probability of features being hashed to the same bucket (increasing the numbers of true and false positives).
Bucketed Random Projection accepts arbitrary vectors as input features, and supports both sparse and dense vectors.
? Scala
? Java
? Python
Refer to the BucketedRandomProjectionLSH Scala docs for more details on the API.
import org.apache.spark.ml.feature.BucketedRandomProjectionLSH
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val dfA = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 1.0)),
  (1, Vectors.dense(1.0, -1.0)),
  (2, Vectors.dense(-1.0, -1.0)),
  (3, Vectors.dense(-1.0, 1.0))
)).toDF("id", "features")

val dfB = spark.createDataFrame(Seq(
  (4, Vectors.dense(1.0, 0.0)),
  (5, Vectors.dense(-1.0, 0.0)),
  (6, Vectors.dense(0.0, 1.0)),
  (7, Vectors.dense(0.0, -1.0))
)).toDF("id", "features")

val key = Vectors.dense(1.0, 0.0)

val brp = new BucketedRandomProjectionLSH()
  .setBucketLength(2.0)
  .setNumHashTables(3)
  .setInputCol("features")
  .setOutputCol("hashes")

val model = brp.fit(dfA)

// Feature Transformation
println("The hashed dataset where hashed values are stored in the column 'hashes':")
model.transform(dfA).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate
// similarity join.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxSimilarityJoin(transformedA, transformedB, 1.5)
println("Approximately joining dfA and dfB on Euclidean distance smaller than 1.5:")
model.approxSimilarityJoin(dfA, dfB, 1.5, "EuclideanDistance")
  .select(col("datasetA.id").alias("idA"),
    col("datasetB.id").alias("idB"),
    col("EuclideanDistance")).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate nearest
// neighbor search.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxNearestNeighbors(transformedA, key, 2)
println("Approximately searching dfA for 2 nearest neighbors of the key:")
model.approxNearestNeighbors(dfA, key, 2).show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala" in the Spark repo.
MinHash for Jaccard Distance
MinHash is an LSH family for Jaccard distance where input features are sets of natural numbers. Jaccard distance of two sets is defined by the cardinality of their intersection and union:
d(A, B) = 1 − |A ∩ B| / |A ∪ B|
MinHash applies a random hash function g to each element in the set and takes the minimum of all hashed values:
h(A) = min_{a ∈ A} g(a)
The input sets for MinHash are represented as binary vectors, where the vector indices represent the elements themselves and the non-zero values in the vector represent the presence of that element in the set. While both dense and sparse vectors are supported, typically sparse vectors are recommended for efficiency. For example, Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0))) means there are 10 elements in the space. This set contains elem 2, elem 3 and elem 5. All non-zero values are treated as binary "1" values.
Note: Empty sets cannot be transformed by MinHash, which means any input vector must have at least 1 non-zero entry.
? Scala
? Java
? Python
Refer to the MinHashLSH Scala docs for more details on the API.
import org.apache.spark.ml.feature.MinHashLSH
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val dfA = spark.createDataFrame(Seq(
  (0, Vectors.sparse(6, Seq((0, 1.0), (1, 1.0), (2, 1.0)))),
  (1, Vectors.sparse(6, Seq((2, 1.0), (3, 1.0), (4, 1.0)))),
  (2, Vectors.sparse(6, Seq((0, 1.0), (2, 1.0), (4, 1.0))))
)).toDF("id", "features")

val dfB = spark.createDataFrame(Seq(
  (3, Vectors.sparse(6, Seq((1, 1.0), (3, 1.0), (5, 1.0)))),
  (4, Vectors.sparse(6, Seq((2, 1.0), (3, 1.0), (5, 1.0)))),
  (5, Vectors.sparse(6, Seq((1, 1.0), (2, 1.0), (4, 1.0))))
)).toDF("id", "features")

val key = Vectors.sparse(6, Seq((1, 1.0), (3, 1.0)))

val mh = new MinHashLSH()
  .setNumHashTables(5)
  .setInputCol("features")
  .setOutputCol("hashes")

val model = mh.fit(dfA)

// Feature Transformation
println("The hashed dataset where hashed values are stored in the column 'hashes':")
model.transform(dfA).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate
// similarity join.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxSimilarityJoin(transformedA, transformedB, 0.6)
println("Approximately joining dfA and dfB on Jaccard distance smaller than 0.6:")
model.approxSimilarityJoin(dfA, dfB, 0.6, "JaccardDistance")
  .select(col("datasetA.id").alias("idA"),
    col("datasetB.id").alias("idB"),
    col("JaccardDistance")).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate nearest
// neighbor search.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxNearestNeighbors(transformedA, key, 2)
// It may return less than 2 rows when not enough approximate near-neighbor candidates are
// found.
println("Approximately searching dfA for 2 nearest neighbors of the key:")
model.approxNearestNeighbors(dfA, key, 2).show()
Find full example code at "examples/src/main/scala/org/apache/spark/examples/ml/MinHashLSHExample.scala" in the Spark repo.
