Adventures with metrics in The Times and The Sunday Times newsrooms - Part 1: Problems
TLDR: Designing metrics that help you make better decisions is hard. In The Times and The Sunday Times newsrooms, we have spent a lot of time trying to tackle three particular problems.
How do we put metrics into context, so the people who use them can quickly understand if a value is good, bad or neither?
How do we account for uncertainty in metrics, so people don’t waste time agonising over insignificant differences?
How do we focus on the decisions that people should consider making when confronted with these metrics?
None of these are novel problems, but they are rarely accounted for in web analytics tools, which present data in ways that — for us at least — are often meaningless without applying a lot of time and effort to interpret them. We’ve talked about some of this before but revisit the key problems here and will delve deeper into the practical aspects of trying to solve these in future posts.
Trying to make decisions with data
“If a measurement matters at all, it is because it must have some conceivable effect on decisions and behaviour. If we can’t identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value”
Douglas W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, 2007
Like many digital businesses we use web analytics tools that measure how visitors interact with our websites and apps. These tools provide dozens of simple metrics, but in our experience their value for informing a decision is close to zero without first applying a significant amount of time, effort and experience to interpret them.
Ideally we would like to use web analytics data to make inferences about what stories our readers value and care about. We can then use this to inform a range of decisions: what stories to commission, how many articles to publish, how to spot clickbait, which headlines to change, which articles to reposition on the page, and so on.
Finding what is newsworthy can not and should not be as mechanistic as analysing an e-commerce store, where the connection between the metrics and what you are interested in measuring (visitors and purchases) is more direct. We know that — at best — this type of data can only weakly approximate what readers really think, and too much reliance on data for making decisions will have predictable negative consequences. However, if there is something of value the data has to say, we would like to hear it.
Unfortunately, simple web analytics metrics fail to account for key bits of context that are vital if we want to understand if their values are higher or lower than what we should expect (and therefore interesting).
Moreover, there is inherent uncertainty in the data we are using, and even if we can tell whether the value is higher or lower than expected, it is difficult to tell whether this is just down to chance.
Good analysts, familiar with their domain, often get good at doing the mental gymnastics required to account for context and uncertainty, so they can derive the insights that support good decisions. But doing this systematically when presented with a sea of metrics is rarely possible, nor is it the best use of an analyst’s valuable sense-making skills. Rather than spending all their time trying to identify what is unusual, it would be better if their skills could be applied to learning why something is unusual or deciding how we might improve things. But if all of our attention is focused on the lower-level what questions, we never get to the why or how questions — which is where we stand a chance of getting some value from the data.
Context
“The value of a fact shrinks enormously without context”
Howard Wainer, Visual Revelations: Graphical Tales of Fate and Deception from Napoleon Bonaparte to Ross Perot, 1997
Take two metrics that we would expect to be useful — how many people start reading an article (we call this readers), and how long they spend on it (we call this the average dwell time). If the metrics worked as intended, they could help us identify the stories our readers care about, but in their raw form, they tell us very little about this.
Readers: If an article is in a more prominent position on the website or app, more people will see it and click on it.
Dwell time: If an article is longer, on average, people will tend to spend more time reading it.
Counting the number of readers tells us more about where an article was placed, and dwell time more about the length of the article than anything meaningful.
It’s not just length and position that matter. Other context such as the section, the day of the week, how long since it was published, and whether people are reading it on our website or apps all systematically influence these numbers. So much so, that we can do a reasonable job of predicting how many readers an article will get and how long they will spend on it by only looking at its context, and completely ignoring the content of the article.
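To make that concrete, here is a minimal sketch of the kind of context-only model described above. It is not the model we run in production: the file name, feature columns and choice of regressor are placeholder assumptions, but it shows the shape of the idea — predict readers from context alone and see how much of the variation that explains.

```python
# A hedged sketch: predicting readers from context alone.
# "articles.csv", the column names and the regressor are illustrative
# assumptions, not our production setup.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("articles.csv")  # one row per article: context plus outcomes

context_features = ["word_count", "position", "section",
                    "day_of_week", "hours_since_publish", "platform"]
categorical = ["position", "section", "day_of_week", "platform"]

X_train, X_test, y_train, y_test = train_test_split(
    df[context_features], df["readers"], test_size=0.2, random_state=0)

model = Pipeline([
    # One-hot encode the categorical context; numeric context passes through.
    ("prep", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("gbm", GradientBoostingRegressor(random_state=0)),
])
model.fit(X_train, y_train)

# If context alone explains a large share of the variance in readers, the raw
# count says more about an article's circumstances than about the story itself.
# (In practice you would likely model log(readers), since counts are skewed.)
print("R^2 on held-out articles:", model.score(X_test, y_test))
```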
From this perspective, articles are a victim of circumstance, and the raw metrics we see in so many dashboards tell us more about their circumstances than anything more meaningful: it’s all noise and very little signal.
Knowing this, what we really want to understand is how much better or worse an article did than we would expect, given that context. In our newsroom, we do this by turning each metric (readers, dwell time and some others) into an index that compares the actual metric for an article to its expected value. We score it on a scale from 1 to 5, where 3 is as expected, 4 or 5 is better than expected and 1 or 2 is worse than expected.
Article A: a longer article in a more prominent position. Neither the number of readers nor the time they spent reading it was different from what we would expect (both indices = 3).

Article B: a shorter article in a less prominent position. Whilst it had the expected number of readers (index = 3), they spent longer reading it than we would expect (index = 4).

The figures above show how we present this information when looking at individual articles. Article A had 7,129 readers, more than four thousand more readers than article B, and people spent 2m 44s reading article A, almost a minute longer than article B. A simple web analytics display would pick article A as the winner on both counts by a large margin. And completely mislead us.
Once we take into account the context, and calculate the indices, we find that both articles had about as many readers as we would expect, no more or less. Even though article B had four thousand fewer, it was in a less prominent position, and so we wouldn’t expect so many. However, people did spend longer reading article B than we would expect, given factors such as its length (it was shorter than article A).
The indices are the output of a predictive model, which predicts a certain value (e.g. number of readers), based on the context (the features in the model). The difference between the actual value and the predicted value (the residuals in the model) then form the basis of the index, which we rescale into the 1–5 score. An additional benefit is that we also have a common scale for different measures, and a common language for discussing these metrics across the newsroom.
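As a rough illustration of that last step, the sketch below turns an actual value and a model-predicted expected value into a ratio-style index (100 meaning the article did exactly as its context predicted) and then buckets it into the 1 to 5 score. The scaling and the thresholds are assumptions for illustration only, not the real INCA calculation; in practice the expected value would come from a context model like the one sketched above.

```python
# A hedged sketch of turning residuals into an index. The ratio-style scaling
# (100 = exactly as expected) and the bucket thresholds are illustrative
# assumptions, not the newsroom's real calculation.
import numpy as np

def continuous_index(actual, expected):
    """100 means the article did exactly as its context predicted."""
    actual = np.asarray(actual, dtype=float)
    expected = np.maximum(np.asarray(expected, dtype=float), 1.0)
    return 100.0 * actual / expected

def to_five_point(index_value, thresholds=(70, 90, 110, 130)):
    """Collapse the continuous index into the 1-5 score:
    3 = roughly as expected, 4 or 5 = better than expected, 1 or 2 = worse."""
    return int(np.searchsorted(thresholds, index_value)) + 1

# Made-up numbers: the context model expected 10,000 readers, we saw 7,500.
expected_readers = 10_000
actual_readers = 7_500
idx = continuous_index(actual_readers, expected_readers)   # 75.0
print(to_five_point(idx))                                   # 2: a bit worse than expected
```

The key point is that the index is defined relative to expectation, so two very different raw reader counts can both score a 3 if each is in line with its own context.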
Unless we account for context, we can only really use data for inspection: ‘Just tell me which article got me the most readers, I don’t care why’. If an article only had more readers because it was at the top of the edition, we’re not learning anything useful from the data, and at worst it creates a self-fulfilling feedback loop (more prominent articles get more readers — similar to the popularity bias that can occur in recommendation engines).
In his excellent book Upstream, Dan Heath talks about moving from data for inspection to data for learning. Data for learning is fundamental if we want to make better decisions. If we want to use data for learning in the newsroom, it’s incredibly useful to be able to identify which articles are performing better or worse than we would expect, but that is only ever the start. The real learning comes from what we do with that information, trying something different, and seeing if it has a positive effect on our readers’ experience.
“Using data for inspection is so common that leaders are sometimes oblivious to any other model.”
Dan Heath, Upstream: The Quest to Solve Problems Before They Happen, 2020
Uncertainty
“What is not surrounded by uncertainty cannot be truth”
Richard Feynman (probably)
The metrics presented in web analytics tools are incredibly precise. 7,129 people read the article we looked at earlier. How do we compare that to an article with 7,130 readers? What about one with 8,000? When presented with numbers, we can’t help making comparisons, even if we have no idea whether the difference matters.
We developed our indices to avoid meaningless comparisons that didn’t take into account context, but earlier versions of our indices were displayed in a way that suggested more precision than they provided — we used a scale from 0 to 200 (with 100* as expected).
*Originally we had 0 as our expected value, but we quickly learnt that nobody likes having a negative score for their article; something below 100 is more palatable.
Predictably, people started worrying about small differences in the index values between articles. ‘This article scored 92, but that one scored 103; that second article did better, let’s look at what we can learn from it.’ Sadly, the model we use to generate the index is not that accurate, and models, like data, have uncertainty associated with them. Just as people agonise over small, meaningless differences in raw numbers, the same was happening with the indices, and so we moved to a simple 5-point scale.
Most articles get a 3, which can be interpreted as ‘we don’t think there is anything to see here, the article is doing as well as we’d expect on this measure’. An index of 2 or 1 means it is doing a bit worse or a lot worse than expected, and a 4 or a 5 means it is doing a bit better or a lot better than expected.
In this format, the indices provide just enough information for us to know — at a glance — how an article is doing. We use this alongside other data visualisations of indices or raw metrics where more precision is helpful, but in all cases our aim is to help focus attention on what matters, and free up time to validate these insights and decide what to do with them.
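To see how the coarser scale absorbs these insignificant differences, here is a tiny, self-contained illustration using the two old-scale values mentioned above; the bucket thresholds are the same assumed cut-offs as in the earlier sketch, not our real ones.

```python
# With assumed cut-offs, an 11-point gap on the old 0-200 scale disappears:
# both articles simply score a 3.
import numpy as np

thresholds = (70, 90, 110, 130)  # illustrative assumption, not the real cut-offs
for old_scale_value in (92, 103):
    score = int(np.searchsorted(thresholds, old_scale_value)) + 1
    print(old_scale_value, "->", score)   # 92 -> 3, 103 -> 3
```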
Why are context and uncertainty so often ignored?
These problems are not new and are covered in many great books on data sense-making: some are decades old, with more recent contributions from Howard Wainer, Stephen Few and R J Andrews.
Practical guidance on dealing with uncertainty is easier to come by, but in our experience, thinking about context is trickier. From some perspectives this is odd. Predictive models — the bread and butter of data scientists — inherently deal with context as well as uncertainty, as do many of the tools for analysing time series data and detecting anomalies (such as statistical process control). But we are also taught to be cautious when making comparisons where there are fundamental differences between the things we are measuring. Since there are so many differences between the articles we publish, from length, position, who wrote them, what they are about, to the section and day of week on which they appear, we are left wondering whether we can or should use data to compare any of them. Perhaps the guidance on piecing all of this together to build better measurement metrics is less common, because how you deal with context is so contextual.
Even if you set out on this path, there are many mundane reasons to fail. Often the valuable context is unavailable. It took us months to bring basic metadata about our articles, such as length and the position in which they appear, into the same system as the web analytics data. An even bigger obstacle is how much time it takes just to maintain a reliable metrics system (digital products are constantly changing, and this often breaks the web analytics data, including ours as I wrote this). Ideas for improving metrics often stay as ideas or proofs of concept that are not fully rolled out as you deal with these issues.
If you do get started, there are myriad choices to make to account for context and uncertainty, from technical to ethical, all involving value judgements. If you stick with a simple metric you can avoid these choices. Bad choices can derail you, but even if you make good ones, if you can’t adequately explain what you have done, you can’t expect the people who use the metrics to trust them. By accounting for context and uncertainty you may replace a simple (but not very useful) metric with something that is in theory more useful, but whose opaqueness causes more problems than it solves. Even worse, people place too much trust in the metric and use it without questioning it.
As for using data to make decisions, we will leave that for another post. But if the data is all noise and no signal, how do you present it in a clear way so the people using it understand what decisions it can help them make? The short answer is that you can’t. But if the pressure is on to present some data, it is easier to passively display it in a big dashboard filled with metrics and leave it to others to work out what to do, in the same way passive language can shield you if you have nothing interesting to say (or bullshit, as Carl T. Bergstrom would call it). This is something else we have battled with, and we have tried to avoid replacing big dashboards filled with metrics with big dashboards filled with indices.
Adding an R for reliable and an E for explainable, we end up with a checklist to help us avoid bad — or CRUDE — metrics (Context, Reliability, Uncertainty, Decision-orientated, Explainability). Checklists are always useful, as it’s easy to forget what matters along the way.
Anybody promising a quick and easy path to metrics that solve all your problems is probably trying to sell you something. In our experience, it takes time and a significant commitment by everybody involved to build something better. If you don’t have this, it’s tough to even get started.
Non-human metrics
Part of the joy and pain of applying these principles to metrics used for analytics — that is, numbers that are put in front of people who then use them to help them make decisions — is that it provides a visceral feedback loop when you get it wrong. If the metrics cannot be easily understood, if they don’t convey enough information (or convey too much), if they are biased, if they are unreliable, or if they just look plain wrong against everything the person using them knows, you’re in trouble. Whatever the reason, you hear about it pretty quickly, and this is a good motivator for addressing problems head-on if you want to maintain trust in the system you have built.
Many metrics are not designed to be consumed by humans. The metrics that live inside automated decision systems are subject to many of the same considerations, biases and value judgements. It is sobering to consider the number of changes and improvements we have made based on the positive feedback loop from people using our metrics in the newsroom on a daily basis. This is not the case with many automated decision systems.
For more background on INCA — the internal system our newsroom uses to access our metrics and indices — see here or The Digital Times, and we will be sharing more in upcoming posts.
Original article: https://medium.com/news-uk-technology/adventures-with-metrics-in-a-newsroom-part-1-problems-81ff8ace132