
Hans Moravec is the inventor of the occupancy grid map.


Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover


Hans Moravec
March 1980
Computer Science Department
Stanford University
(Ph.D. thesis)

Preface

The Stanford AI lab cart is a card-table sized mobile robot controlled remotely through a radio link, and equipped with a TV camera and transmitter. A computer has been programmed to drive the cart through cluttered indoor and outdoor spaces, gaining its knowledge about the world entirely from images broadcast by the onboard TV system.

The cart deduces the three dimensional location of objects around it, and its own motion among them, by noting their apparent relative shifts in successive images obtained from the moving TV camera. It maintains a model of the location of the ground, and registers objects it has seen as potential obstacles if they are sufficiently above the surface, but not too high. It plans a path to a user-specified destination which avoids these obstructions. This plan is changed as the moving cart perceives new obstacles on its journey.

The system is moderately reliable, but very slow. The cart moves about one meter every ten to fifteen minutes, in lurches. After rolling a meter, it stops, takes some pictures and thinks about them for a long time. Then it plans a new path, and executes a little of it, and pauses again.

The program has successfully driven the cart through several 20 meter indoor courses (each taking about five hours) complex enough to necessitate three or four avoiding swerves. A less successful outdoor run, in which the cart swerved around two obstacles but collided with a third, was also done. Harsh lighting (very bright surfaces next to very dark shadows), resulting in poor pictures, and movement of shadows during the cart's creeping progress were major reasons for the poorer outdoor performance. These obstacle runs have been filmed (minus the very dull pauses).

Hans Moravec
March 2, 1980


Table of Contents

Chapter 1: Introduction
Chapter 2: History
Chapter 3: Overview
Chapter 4: Calibration
Chapter 5: Interest Operator
Chapter 6: Correlation
Chapter 7: Stereo
Chapter 8: Path Planning
Chapter 9: Evaluation
Chapter 10: Spinoffs
Chapter 11: Future Carts
Chapter 12: Connections

Appendix 1: Introduction
Appendix 2: History
Appendix 3: Overview
Appendix 6: Correlation
Appendix 7: Stereo
Appendix 8: Path Planning
Appendix 10: Spinoffs
Appendix 12: Connections


Acknowledgements

My nine year stay at the Stanford AI lab has been pleasant, but long enough to tax my memory. I hope not too many people have been forgotten.

Rod Brooks helped with most aspects of this work during the last two years and especially during the grueling final weeks before the lab move in 1979. Without his help my PhD-hood might have taken ten years.

Vic Scheinman has been a patron saint of the cart project since well before my involvement. Over the years he has provided untold motor and sensor assemblies, and general mechanical expertise whenever requested. His latest contribution was the camera slider assembly which is the backbone of the cart's vision.

Don Gennery provided essential statistical geometry routines, and many useful discussions.

Mike Farmwald wrote several key routines in the display and vision software packages used by the obstacle avoider, and helped construct some of the physical environment which made cart operations pleasant.

Jeff Rubin pleasantly helped with the electronic design of the radio control link and other major components.

Marvin Horton provided support and an array of camera equipment, including an impressive home-built ten meter hydraulic movie crane for the filming of the final cart runs.

Others who have helped recently are Harlyn Baker, Peter Blicher, Dick Gabriel, Bill Gosper, Elaine Kant, Mark LeBrun, Robert Maas, Allan Miller, Lynne Toribara and Polle Zellweger.

My debts in the farther past are many, and my recollection is sporadic. I remember particularly the difficult time reconstructing the cart's TV transmitter. Bruce Bullock, Tom Gafford, Ed McGuire and Lynn Quam made it somewhat less traumatic.

Delving even farther, I wish to thank Bruce Baumgart for radiating a pleasantly (and constructively) wild eyed attitude about this line of work, and Rod Schmidt, whom I have never met, for building the hardware that made my first five years of cart work possible.

In addition I owe very much to the unrestrictive atmosphere created at the lab mainly by John McCarthy and Les Earnest, and maintained by Tom Binford, and also to the excellent system support provided to me (over the years) by Marty Frost, Ralph Gorin, Ted Panofsky and Robert Poor.

Hans Moravec, 1980

Chapter 1: Introduction

This is a report about a modest attempt at endowing a mild mannered machine with a few of the attributes of higher animals.

An electric vehicle, called the cart, remote controlled by a computer, and equipped with a TV camera through which the computer can see, has been programmed to run undemanding but realistic obstacle courses.



Figure 1.1: The cart, like a card table, but taller

The methods used are minimal and crude, and the design criteria were simplicity and performance. The work is seen as an evolutionary step on the road to intellectual development in machines. Similar humble experiments in early vertebrates eventually resulted in human beings.



Figure 1.2: SRI's Shakey and JPL's Robotics Research Vehicle

The hardware is also minimal. The television camera is the cart's only sense organ. The picture perceived can be converted to an array of numbers in the computer of about 256 rows and 256 columns, with each number representing up to 64 shades of gray. The cart can drive forwards and back, steer its front wheels and move its camera from side to side. The computer controls these functions by turning motors on and off for specific lengths of time.

Better (at least more expensive) hardware has been and is being used in similar work elsewhere. SRI's Shakey moved around in a contrived world of giant blocks and clean walls. JPL is trying to develop a semi-autonomous rover for the exploration of Mars and other far away places (the project is currently mothballed awaiting resumption of funding). Both SRI's and JPL's robots use laser rangefinders to determine the distance of nearby objects in a fairly direct manner. My system, using less hardware and more computation, extracts the distance information from a series of still pictures of the world from different points of view, by noting the relative displacement of objects from one picture to the next.
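
The ranging principle can be illustrated with a toy calculation (a simplification; the thesis's camera solver handles general motion and many features statistically, but the underlying geometry is this):

```python
# Toy illustration of motion-stereo ranging: a feature's apparent shift
# (disparity) between two views taken a known baseline apart determines its
# distance.  The numbers below are made up for illustration.
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Distance to a feature, from similar triangles:
    depth / baseline = focal_length / disparity."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the views to be ranged")
    return baseline_m * focal_px / disparity_px

# A feature that shifts 32 pixels between views 0.5 m apart, with a focal
# length equivalent to 512 pixels, is 8 meters away.
print(depth_from_disparity(0.5, 512.0, 32.0))  # 8.0
```

Nearby objects shift more than distant ones between the views, which is why a series of still pictures from different points of view suffices to recover distance.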


Applications


A Mars rover is the most likely near term use for robot vehicle techniques. The half hour radio delay between Earth and Mars makes direct remote control an unsatisfactory way of guiding an exploring device. Automatic aids, however limited, would greatly extend its capabilities. I see my methods as complementary to approaches based on rangefinders. A robot explorer will have a camera in addition to whatever other sensors it carries. Visual obstacle avoidance can be used to enhance the reliability of other methods, and to provide a backup for them.

Robot submersibles are almost as exotic as Mars rovers, and may represent another not so distant application of related methods. Remote control of submersibles is difficult because water attenuates conventional forms of long distance communication. Semi-autonomous minisubs could be useful for some kinds of exploration and may finally make seabed mining practical.

In the longer run the fruits of this kind of work can be expected to find less exotic uses. Range finder approaches to locating obstacles are simpler because they directly provide the small amount of information needed for undemanding tasks. As the quantity of information to be extracted increases the amount of processing, regardless of the exact nature of the sensor, will also increase.

What a smart robot thinks about the world shouldn't be affected too much by exactly what it sees with. Low level processing differences will be mostly gone at intermediate and high levels. Present cameras offer a more detailed description of the world than contemporary rangefinders, and camera based techniques probably have more potential for higher visual functions.

The mundane applications are more demanding than the rover task. A machine that navigates in the crowded everyday world, whether a robot servant or an automatic car, must efficiently recognize many of the things it encounters to be safe and effective. This will require methods and processing power beyond those now existing. The additional need for low cost guarantees they will be a while in coming. On the other hand work similar to mine will eventually make them feasible.


Chapter 2: History

This work was shaped to a great extent by its physical circumstances; the nature and limitations of the cart vehicle itself, and the resources that could be brought to bear on it. The cart has always been the poor relation of the Stanford Hand-Eye project, and has suffered from lack of many things, not the least of which was sufficient commitment and respect by any principal investigator.

The cart was built in the early 1960's by a group in the Stanford Mechanical Engineering Department under a NASA contract, to investigate potential solutions for the problems of remote controlling a lunar rover from Earth. The image from an onboard TV camera was broadcast to a human operator who manipulated a steering control. The control signals were delayed for two and a half seconds by a tape loop, then broadcast to the cart, simulating the Earth/Moon round trip delay.

The AI lab, then in its enthusiastic spit and baling wire infancy, acquired the cart gratis from ME after they were done, minus its video and remote control electronics. Rod Schmidt, an EE graduate student and radio amateur, was induced to work on restoring the vehicle and driving it under computer control. He spent over two years, but little money, single-handedly building a radio control link based on a model airplane controller, and a UHF TV link. The control link was relatively straightforward and the video receiver was a modified TV set, but the UHF TV transmitter took 18 laborious months of tweaking tiny capacitors and half centimeter turns of wire. The resulting robot was ugly, reflecting its rushed assembly, and marginally functional (the airplane proportional controller was very inaccurate). Like an old car, it needed (and needs) constant repair and replacement of parts, major and minor, that break.

Schmidt then wrote a program for the PDP-6 which drove the cart in real time (but with its motors set to run very slowly) along a wide white line. It worked occasionally. Following a white line with a raised TV camera and a computer turns out to be much more difficult than following a line at close range with a photocell tracker. The camera scene is full of high contrast extraneous detail, and the lighting conditions are unreliable. This simple program taxed the processing power of the PDP-6. It also clearly demonstrated the need for more accurate and reliable hardware if more ambitious navigation problems were to be tackled. Schmidt wrote up the results and finished his degree.

Bruce Baumgart picked up the cart banner, and announced an ambitious approach that would involve modelling the world in great detail, and by which the cart could deduce its position by comparing the image it saw through its camera with images produced from its model by a 3D drawing program. He succeeded reasonably well with the graphics end of the problem.



Figure 2.1: The old AI lab and some of the surrounding terrain, about 1968

The real world part was a dismal failure. He began with a rebuild of the cart control electronics, replacing the very inaccurate analog link with a supposedly more repeatable digital one. He worked as single-handedly as did Schmidt, but without the benefit of prior experience with hardware construction. The end result was a control link that, because of a combination of design flaws and undetected bugs, was virtually unusable. One time out of three the cart moved in a direction opposite to which it had been commanded, left for right or forwards for backwards.

During this period a number of incoming students were assigned to the “cart project”. Each correctly perceived the situation within a year, and went on to something else. The cart's reputation as a serious piece of research apparatus, never too high, sank to new depths.

I came to the AI lab, enthusiastic and naive, with the specific intention of working with the cart. I'd built a series of small robots, beginning in elementary school, and the cart, of whose existence, but not exact condition, I'd learned, seemed like the logical next step. Conditions at the lab were liberal enough that my choice was not met with termination of financial support, but this largesse did not easily extend to equipment purchases.

Lynn Quam, who had done considerable work with stereo mapping from pictures from the Mariners 6 and 7 Mars missions, expressed an interest in the cart around this time, for its clear research value for Mars rovers. We agreed to split up the problem (the exact goals for the collaboration were never completely clear; mainly they were to get the cart to do as much as possible). He would do the vision, and I would get the control hardware working adequately and write motor subroutines which could translate commands like move a meter forward and a half to the right into appropriate steering and drive signals.

I debugged, then re-designed and rebuilt the control link to work reliably, and wrote a routine that incorporated a simulation of the cart, to drive it (this subroutine was resurrected in the final months of the obstacle avoider effort, and is described in chapter 8). I was very elated by my quick success, and spent considerable time taking the cart on joy rides. I would open the big machine room doors near the cart's parking place, and turn on the cart. Then I would rush to my office, tune in the cart video signal on a monitor, start a remote control program, and, in armchair and air conditioned comfort, drive the cart out the doors. I would steer it along the outer deck of the lab to one of three ramps on different sides of the building. I then drove it down the narrow ramp (they were built for deliveries), and out into the driveway or onto the grass, to see (on my screen) what there was to see. Later I would drive it back the same way, then get up to close the doors and power it down. With increasing experience, I became increasingly cocky. During the 1973 IJCAI, held at Stanford, I repeatedly drove it up and down the ramps, and elsewhere, for the amusement of the crowds visiting the AI lab during an IJCAI sponsored winetasting.

Shortly after the IJCAI my luck ran out. Preparing to drive it down the front ramp for a demonstration, I misjudged the position of the right edge by a few centimeters. The cart's right wheels missed the ramp, and the picture on my screen slowly rotated 90°, then turned into noise. Outside, the cart was lying on its side, with acid from its batteries spilling into the electronics. Sigh.

The sealed TV camera was not damaged. The control link took less than a month to resurrect. Schmidt's video transmitter was another matter. I spent a total of nine frustrating months first trying, unsuccessfully, to repair it, then building (and repeatedly rebuilding) a new one from the old parts using a cleaner design found in a ham magazine and a newly announced UHF amplifier module from RCA. The new one almost worked, though its tuning was touchy. The major problem was a distortion in the modulation. The RCA module was designed for FM, and did a poor job on the AM video signal. Although TV sets found the broadcast tolerable, our video digitizer was too finicky.

During these nine difficult months I wrote to potential manufacturers of such transmitters, and also inquired about borrowing the video link used by Shakey, which had been retired by SRI. SRI, after due deliberation, turned me down. Small video transmitters are not off the shelf items; the best commercial offer I got was for a two watt transmitter costing $4000.

Four kilobucks was an order of magnitude more money than had ever been put into cart hardware by the AI lab, though it was considerably less than had been spent on salary in Schmidt's 18 months and my 9 months of transmitter hacking. I begged for it and got an agreement from John McCarthy that I could buy a transmitter, using ARPA money, after demonstrating a capability to do vision.

During the next month I wrote a program that picked a number of features in one picture of a motion stereo pair (the “interest operator” of Chapter 5 was invented here), found them in the other image with a simple correlator, did a crude distance calculation, and generated a fancy display. Apparently this was satisfactory; the transmitter was ordered.

By this time Quam had gone on to other things. With the cart once again functional, I wrote a program that drove it down the road in a straight line by servoing on points it found on the distant horizon with the interest operator and tracked with the correlator. Like the current obstacle avoider, it did not run in real time, but in lurches. That task was much easier, and even on the KA-10, our main processor at the time, each lurch took at most 15 seconds of real time. The distance travelled per lurch was variable; as small as a quarter meter when the program detected significant variations from its desired straight path, repeatedly doubling up to many meters when everything seemed to be working. The program also observed the cart's response to commands, and updated a response model which it used to guide future commands. The program was reliable and fun to watch, except that the remote control link occasionally failed badly. The cause appeared to be interference from passing CBers. The citizens band boom had started, and our 100 milliwatt control link, which operated in the CB band, was not up to the competition.

I replaced the model airplane transmitter and receiver by standard (but modified) CB transceivers, increasing the broadcast power to 5 watts. To test this and a few other improvements in the hardware, I wrote an updated version of the horizon tracker which incorporated a new idea, the faster and more powerful “binary search” correlator of Chapter 6. This was successful, and I was ready for bigger game.

Obstacle avoidance could be accomplished using many of the techniques in the horizon tracker. A dense cloud of features on objects in the world could be tracked as the cart rolled forward, and a 3D model of their position and the cart's motion through them could be deduced from their relative motion in the image. Don Gennery had already written a camera solving subroutine, used by Quam and Hannah, which was capable of such a calculation.

I wrote a program which drove the cart, tracking features near and far, and feeding them to Gennery's subroutine. The results were disappointing. Even after substantial effort, aggravated by having only a very poor a priori model of cart motion, enough of the inevitable correlation errors escaped detection to make the camera solver converge to the wrong answer about 10 to 20% of the time. This error rate was too high for a vehicle that would need to navigate through at least tens of such steps. Around this time I happened to catch some small lizards, that I kept for a while in a terrarium. Watching them, I observed an interesting behavior.

The lizards caught flies by pouncing on them. Since flies are fast, this requires speed and 3D precision. Each lizard had eyes on opposite sides of its head; the visual fields could not overlap significantly, ruling out stereo vision. But before a pounce, a lizard would fix an eye on its victim, and sway its head slowly from side to side. This seemed a sensible way to range.

My obstacle avoiding task was defeating the motion stereo approach, and the lizard's solution seemed promising. I built a stepping motor mechanism that could slide the cart's camera from side to side in precise increments. The highly redundant information available from this apparatus broke the back of the problem, and made the obstacle avoider that is the subject of this thesis possible.

Chapter 3: Overview

A typical run of the avoider system begins with a calibration of the cart's camera. The cart is parked in a standard position in front of a wall of spots. A calibration program (described in Chapter 4) notes the disparity in position of the spots in the image seen by the camera with their position predicted from an idealized model of the situation. It calculates a distortion correction polynomial which relates these positions, and which is used in subsequent ranging calculations.



Figure 3.1: The cart in its calibration pose

The cart is then manually driven to its obstacle course. Typically this is either in the large room in which it lives, or a stretch of the driveway which encircles the AI lab. Chairs, boxes, cardboard constructions and assorted debris serve as obstacles in the room. Outdoors the course contains curbing, trees, parked cars and signposts as well.



Figure 3.2: The cart indoors



Figure 3.3: The cart outdoors

The obstacle avoiding program is started. It begins by asking for the cart's destination, relative to its current position and heading. After being told, say, 50 meters forward and 20 to the right, it begins its maneuvers.

It activates a mechanism which moves the TV camera, and digitizes about nine pictures as the camera slides (in precise steps) from one side to the other along a 50 cm track.



Figure 3.4: A closeup of the slider mechanism


A subroutine called the interest operator (described in Chapter 5) is applied to one of these pictures. It picks out 30 or so particularly distinctive regions (features) in this picture. Another routine called the correlator (Chapter 6) looks for these same regions in the other frames. A program called the camera solver (Chapter 7) determines the three dimensional position of the features with respect to the cart from their apparent movement image to image.

The navigator (Chapter 8) plans a path to the destination which avoids all the perceived features by a large safety margin. The program then sends steering and drive commands to the cart to move it about a meter along the planned path. The cart's response to such commands is not very precise.

After the step forward the camera is operated as before, and nine new images are acquired. The control program uses a version of the correlator to find as many of the features from the previous location as possible in the new pictures, and applies the camera solver. The program then deduces the cart's actual motion during the step from the apparent three dimensional shift of these features.

The motion of the cart as a whole is larger and less constrained than the precise slide of the camera. The images between steps forward can vary greatly, and the correlator is usually unable to find many of the features it wants. The interest operator/correlator/camera solver combination is used to find new features to replace lost ones.

The three dimensional location of any new features found is added to the program's model of the world. The navigator is invoked to generate a new path that avoids all known features, and the cart is commanded to take another step forward.

This continues until the cart arrives at its destination or until some disaster terminates the program.

Appendix 3 documents the evolution of the cart's internal world model in response to the scenery during a sample run.

An Objection

A method as simple as this is unlikely to handle every situation well. The most obvious problem is the apparently random choice of features tracked. If the interest operator happens to avoid choosing any points on a given obstruction, the program will never notice it, and might plan a path right through it.

The interest operator was designed to minimize this danger. It chooses a relatively uniform scattering of points over the image, locally picking those with most contrast. Effectively it samples the picture at low resolution, indicating the most promising regions in each sample area.

Objects lying in the path of the vehicle occupy ever larger areas of the camera image as the cart rolls forward. The interest operator is applied repeatedly, and the probability that it will choose a feature or two on the obstacle increases correspondingly. Typical obstructions are generally detected before it's too late. Very small or very smooth objects are sometimes overlooked.

Chapter 4: Calibration

Figure 4.1: The cart in its calibration posture before the calibration pattern. A program automatically locates the cross and the spots, and deduces the camera's focal length and distortion.

The cart camera, like most vidicons, has peculiar geometric properties. Its precision has been enhanced by an automatic focal length and distortion determining program.

The cart is parked a precise distance in front of a wall of many spots and one cross (Figure 4.1). The program digitizes an image of the spot array, locates the spots and the cross, and constructs a two dimensional polynomial that relates the position of the spots in the image to their position in an ideal unity focal length camera, and another polynomial that converts points from the ideal camera to points in the image. These polynomials are used to correct the positions of perceived objects in later scenes.



Figure 4.2: The spot array, as digitized by the cart camera

The program tolerates a wide range of spot parameters (about 3 to 12 spots across) and arbitrary image rotation, and is very robust. After being intensely fiddled with to work successfully on an initial set of 20 widely varying images, it has worked without error on 50 successive images. The test pattern for the cart is a 3 meter square painted on a wall, with 5 cm spots at 30 cm intervals. The program has also been used successfully with a small array (22 x 28 cm) to calibrate cameras other than the cart's \ref(W1).




Figure 4.3: Power spectrum of Figure 4.2, and folded transform

Figure 4.4: Results of the calibration program. The distortion polynomial it produced has been used to map an undistorted grid of ideal spot positions into the calculated real world ones. The result is superimposed on the original digitized spot image, making any discrepancies obvious.




Figure 4.5: Another instance of the distortion corrector at work; a longer focal length lens




Figure 4.6: Yet another example; a rotation




Figure 4.7: And yet another example

The algorithm reads in an image of such an array, and begins by determining its approximate spacing and orientation. It trims the picture to make it square, reduces it by averaging to 64 by 64, calculates the Fourier transform of the reduced image and takes its power spectrum, arriving at a 2D transform symmetric about the origin, and having strong peaks at frequencies corresponding to the horizontal and vertical and half-diagonal spacings, with weaker peaks at the harmonics. It multiplies each point $[i,j]$ in this transform by point $[-j,i]$ and points $[j-i,j+i]$ and $[i+j,j-i]$, effectively folding the primary peaks onto one another. The strongest peak in the 90° wedge around the $Y$ axis gives the spacing and orientation information needed by the next part of the process.
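
The folding step can be sketched in a few lines. This is a simplified version, not the thesis code: only the 90° fold is shown (the half-diagonal folds are omitted), and the synthetic test image is a pair of cosine gratings standing in for a digitized spot array.

```python
import numpy as np

def grid_spacing(image):
    """Estimate grid spacing (pixels) from the folded power spectrum."""
    n = image.shape[0]                        # assume a square, odd-sided image
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    folded = spectrum * np.rot90(spectrum)    # fold peaks 90 degrees apart
    c = n // 2
    folded[c - 1:c + 2, c - 1:c + 2] = 0      # suppress the DC peak
    i, j = np.unravel_index(np.argmax(folded), folded.shape)
    cycles = np.hypot(i - c, j - c)           # peak radius = cycles per image
    return n / cycles

# Synthetic stand-in for the spot wall: cosine gratings with period 13 pixels
# along both axes.
y, x = np.mgrid[0:65, 0:65]
grid = np.cos(2 * np.pi * x / 13) + np.cos(2 * np.pi * y / 13)
print(round(grid_spacing(grid), 1))  # 13.0
```

Multiplying the spectrum by its own rotation makes the horizontal and vertical grid peaks reinforce each other, so a single strong peak survives to give both spacing and orientation; the peak's angle about the center (not computed in this sketch) is the grid's rotation.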

The directional variance interest operator described later (Chapter 5) is applied to roughly locate a spot near the center of the image. A special operator examines a window surrounding this position, generates a histogram of intensity values within the window, decides a threshold for separating the black spot from the white background, and calculates the centroid and first and second moment of the spot. This operator is again applied at a displacement from the first centroid indicated by the orientation and spacing of the grid, and so on, the region of found spots growing outward from the seed.
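
A stripped-down version of the spot operator might look like this. The midpoint threshold and the synthetic window are assumptions for illustration; the thesis derives its threshold from the window's intensity histogram, and also computes the spot's moments.

```python
import numpy as np

def spot_centroid(window):
    # Midpoint threshold separating dark spot from light background
    # (an assumption; the thesis picks the threshold from a histogram).
    threshold = (window.min() + window.max()) / 2.0
    ys, xs = np.nonzero(window < threshold)   # pixels belonging to the spot
    return float(ys.mean()), float(xs.mean())

# Synthetic 9x9 window: light background, dark 3x3 spot centred at (4, 5).
w = np.full((9, 9), 60)
w[3:6, 4:7] = 5
print(spot_centroid(w))  # (4.0, 5.0)
```

Applying this operator at the grid displacement predicted from the previous centroid is what lets the region of found spots grow outward from the seed.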

A binary template for the expected appearance of the cross in the middle of the array is constructed from the orientation/spacing determined by the Fourier transform step. The area around each of the found spots is thresholded on the basis of the expected cross area, and the resulting two valued pattern is convolved with the cross template. The closest match in the central portion of the picture is declared to be the origin.

Two least-squares polynomials (one for $X$ and one for $Y$) of third (or sometimes fourth) degree in two variables, relating the actual positions of the spots to the ideal positions in a unity focal length camera, are then generated and written into a file.
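
The fitting step is ordinary bivariate least squares. A sketch with made-up data (the observed spot coordinates and "ideal" positions below are synthetic; in the real program they come from the detected grid and the known wall geometry):

```python
import numpy as np

def design_matrix(x, y, degree=3):
    # All monomials x^i * y^j with i + j <= degree.
    return np.column_stack([x ** i * y ** j
                            for i in range(degree + 1)
                            for j in range(degree + 1 - i)])

def fit_distortion(x_obs, y_obs, ideal, degree=3):
    """Least-squares coefficients mapping observed (x, y) to one ideal axis."""
    A = design_matrix(x_obs, y_obs, degree)
    coeffs, *_ = np.linalg.lstsq(A, ideal, rcond=None)
    return coeffs

# Synthetic check: if the ideal X coordinate really is a cubic function of
# the observed (x, y), the fit recovers it essentially exactly.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = rng.uniform(-1, 1, 100)
ideal_x = 0.9 * x + 0.05 * x ** 3 + 0.02 * x * y ** 2
coeffs = fit_distortion(x, y, ideal_x)
print(np.allclose(design_matrix(x, y) @ coeffs, ideal_x))  # True
```

One such fit is done for the ideal X coordinate and another for Y; evaluating the two polynomials at a perceived image position then yields the corrected unity-focal-length coordinates.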

The polynomials are used in the obstacle avoider to correct for camera roll, tilt, focal length and long term variations in the vidicon geometry.

Chapter 5: Interest Operator

The cart vision code deals with very simple primitive entities, localized regions called features. A feature is conceptually a point in the three dimensional world, but it is found by examining localities larger than points in pictures. A feature is good if it can be located unambiguously in different views of a scene. Uniformly colored regions and simple edges do not make good features because their parts are indistinguishable. Regions with high contrast in orthogonal directions, such as corners, are best.

New features in images are picked by a subroutine called the interest operator, an example of whose operation is displayed in Figure 5.1. It tries to select a relatively uniform scattering, to maximize the probability that a few features will be picked on every visible object, and to choose areas that can be easily found in other images. Both goals are achieved by returning regions that are local maxima of a directional variance measure. Featureless areas and simple edges, which have no variance in the direction of the edge, are thus avoided.

Figure 5.1: A cart's eye view from the starting position of an obstacle run, and features picked out by the interest operator. They are labelled in order of decreasing interest measure.

Figure 5.2: A typical interest operator window, and the four sums calculated over it ($P_{I,J}$ are the pixel brightnesses). The interest measure of the window is the minimum of the four sums.

Figure 5.3: The twenty five overlapping windows considered in a local maximum decision. The smallest cells in the diagram are individual pixels. The four by four array of these in the center of the image is the window being considered as a local maximum. In order for it to be chosen as a feature to track, its interest measure must equal or exceed that of each of the other outlined four by four areas.

Directional variance is measured over small square windows. Sums of squares of differences of pixels adjacent in each of four directions (horizontal, vertical and two diagonals) over each window are calculated, and the window's interest measure is the minimum of these four sums.

Features are chosen where the interest measure has local maxima. The feature is conceptually the point at the center of the window with this locally maximal value.

This measure is evaluated on windows spaced half a window width apart over the entire image. A window is declared to contain an interesting feature if its variance measure is a local maximum, that is, if it has the largest value of the twenty five windows which overlap or contact it.

The variance measure depends on adjacent pixel differences and responds to high frequency noise in the image. The effects of noise are alleviated and the processing time is shortened by applying the operator to a reduced image. In the current program original images are 240 lines high by 256 pixels wide. The interest operator is applied to the 120 by 128 version, on windows 3 pixels square.
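The operator described above can be sketched in a few lines of modern Python. This is only an illustrative reconstruction (the function names, image representation, and boundary handling are my assumptions; the original was written in SAIL and FAIL, not Python):

```python
# Sketch of the interest operator: directional variance over small
# windows, minimum of four directional sums, local maxima over the
# 25 overlapping/contacting windows. Illustrative only.

def directional_variance(img, r0, c0, n):
    """Minimum over four directions of the sums of squared differences
    of adjacent pixels, taken over the n-by-n window at (r0, c0)."""
    sums = [0, 0, 0, 0]  # horizontal, vertical, two diagonals
    for r in range(r0, r0 + n):
        for c in range(c0, c0 + n):
            p = img[r][c]
            sums[0] += (p - img[r][c + 1]) ** 2
            sums[1] += (p - img[r + 1][c]) ** 2
            sums[2] += (p - img[r + 1][c + 1]) ** 2
            sums[3] += (p - img[r + 1][c - 1]) ** 2
    return min(sums)

def interesting_features(img, n=3):
    """Windows spaced half a window width apart; a window is kept if its
    measure is a maximum over the 25 windows overlapping or contacting
    it. Returns (measure, row, col) triples in decreasing measure."""
    h, w = len(img), len(img[0])
    step = max(1, n // 2)
    grid = {}
    for r0 in range(1, h - n - 1, step):
        for c0 in range(1, w - n - 1, step):
            grid[(r0, c0)] = directional_variance(img, r0, c0, n)
    feats = []
    for (r0, c0), v in grid.items():
        if v == 0:          # featureless area, never a feature
            continue
        around = [grid.get((r0 + i * step, c0 + j * step), 0)
                  for i in range(-2, 3) for j in range(-2, 3)]
        if v >= max(around):
            feats.append((v, r0 + n // 2, c0 + n // 2))  # window center
    feats.sort(reverse=True)
    return feats
```

A single bright pixel on a dark background, for instance, produces a small cluster of tied maxima centered on it, while the featureless surround is rejected.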



Figure 5.4: Another obstacle run interest operator application



Figure 5.5: More interest operating

The local maxima found are stored in an array, sorted in order of decreasing variance.

The entire process on a typical 260 by 240 image, using 6 by 6 windows takes about 75 milliseconds on the KL-10. The variance computation and local maximum test are coded in FAIL (our assembler) \ref(WG1), the maxima sorting and top level are in SAIL (an Algol-like language) \ref(R1).

Once a feature is chosen, its appearance is recorded as a series of excerpts from the reduced image sequence. A window (6 by 6 in the current implementation) is excised around the feature's location from each of the variously reduced pictures. Only a tiny fraction of the area of the original (unreduced) image is extracted. Four times as much of the x2 reduced image is stored, sixteen times as much of the x4 reduction, and so on until at some level we have the whole image. The final result is a series of 6 by 6 pictures, beginning with a very blurry rendition of the whole picture, gradually zooming in linear expansions of two to a sharp closeup of the feature. Of course, it records the appearance correctly from only one point of view.
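The multi-resolution description can be sketched as follows (a hypothetical reconstruction: the 2x2-averaging reduction and the clipping behavior at image borders are my assumptions, not details given in the text):

```python
# Sketch of the feature description: a 6x6 excerpt is cut from each
# reduction level, centered on the feature, until the window covers
# (roughly) the whole reduced image. Illustrative only.

def reduce2(img):
    """Halve resolution by averaging 2x2 pixel blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]

def excise(img, r, c, n=6):
    """Cut an n-by-n window centered near (r, c), clipped to the image."""
    r0 = min(max(r - n // 2, 0), len(img) - n)
    c0 = min(max(c - n // 2, 0), len(img[0]) - n)
    return [row[c0:c0 + n] for row in img[r0:r0 + n]]

def feature_description(img, r, c, n=6):
    """List of n-by-n excerpts, sharpest first: a closeup of the
    feature, then ever blurrier and wider views."""
    windows = []
    while len(img) >= n and len(img[0]) >= n:
        windows.append(excise(img, r, c, n))
        img = reduce2(img)
        r, c = r // 2, c // 2
    return windows
```

On a 48 by 48 image this yields four windows, from the sharp excerpt at full resolution down to a 6 by 6 rendition of (most of) the whole picture.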

Weaknesses

The interest operator has some fundamental limitations. The basic measure was chosen to reject simple edges and uniform areas. Edges are not suitable features for the correlator because the different parts of an edge are indistinguishable.

The measure is able to unambiguously reject edges only if they are oriented along the four directions of summation. Edges whose angle is an odd multiple of 22.5° give non-zero values for all four sums, and are sometimes incorrectly chosen as interesting.

The operator especially favors intersecting edges. These are sometimes corners or cracks in objects, and are very good. Sometimes they are caused by a distant object peering over the edge of a nearby one and then they are very bad. Such spurious intersections don't have a definite distance, and must be rejected during camera solving. In general they reduce the reliability of the system.

Desirable Improvements

The operator has a fundamental and central role in the obstacle avoider, and is worth improving. Edge rejection at odd angles should be increased, maybe by generating sums in the 22.5° directions.

Rejecting near/far object intersections more reliably than the current implementation does is possible. An operator that recognized that the variance in a window was restricted to one side of an edge in that window would be a good start. Really good solutions to this problem are probably computationally much more expensive than my measure.

Chapter 6: Correlation

Deducing the 3D location of features from their projections in 2D images requires that we know their position in two or more such images.

The correlator is a subroutine that, given a description of a feature as produced by the interest operator from one image, finds the best match in a different, but similar, image. Its search area can be the entire new picture, or a rectangular sub-window.

Figure 6.1: Areas matched in a binary search correlation. Picture at top contains originally chosen feature. The outlined areas in it are the prototypes which are searched for in the bottom picture. The largest rectangle is matched first, and the area of best match in the second picture becomes the search area for the next smaller rectangle. The larger the rectangle, the lower the resolution of the pictures in which the matching is done.


Figure 6.2: The “conventional” representation of a feature used in documents such as this one, and a more realistic version which graphically demonstrates the reduced resolution of the larger windows. The bottom picture was reconstructed entirely from the window sequence used with a binary search correlation. The coarse outer windows were interpolated to reduce quantization artifacts.

The search uses a coarse to fine strategy, illustrated in Figure 6-1, that begins in reduced versions of the pictures. Typically the first step takes place at the $\times 16$ (linear) reduction level. The $6 \times 6$ window at that level in the feature description, that covers about one seventh of the total area of the original picture, is convolved with the search area in the correspondingly reduced version of the second picture. The $6 \times 6$ description patch is moved pixel by pixel over the approximately $15$ by $16$ destination picture, and a correlation coefficient is calculated for each trial position.

The position with the best match is recorded. The $6 \times 6$ area it occupies in the second picture is mapped to the $\times 8$ reduction level, where the corresponding region is $12$ pixels by $12$. The $6 \times 6$ window in the $\times 8$ reduced level of the feature description is then convolved with this $12$ by $12$ area, and the position of best match is recorded and used as a search area for the $\times 4$ level.

The process continues, matching smaller and smaller, but more and more detailed windows until a $6 \times 6$ area is selected in the unreduced picture.

The work at each level is about the same, finding a $6 \times 6$ window in a $12$ by $12$ search area. It involves 49 summations of 36 quantities. In our example there were 5 such levels. The correlation measure used is ${2\sum ab}/({\sum a^2}+{\sum b^2})$, where $a$ and $b$ are the values of pixels in the two windows being compared, with the mean of windows subtracted out, and the sums are taken over the $36$ elements of a $6 \times 6$ window. The measure has limited tolerance to contrast differences.

The window sizes and other parameters are sometimes different from the ones used in this example.

In general, the program thus locates a huge general area around the feature in a very coarse version of the images, and successively refines the position, finding smaller and smaller areas in finer and finer representations. For windows of size $n$, the work at each level is approximately that of finding an $n$ by $n$ window in a $2n$ by $2n$ area, and there are $\log_2(w/n)$ levels, where $w$ is the smaller dimension of the search rectangle, in unreduced picture pixels.

This approach has many advantages over a simple pass of a correlation coefficient computation over the search window. The most obvious is speed. A scan of an $8 \times 8$ window over a $256$ by $256$ picture would require $249 \times 249 \times 8 \times 8$ comparisons of individual pixels. The binary method needs only about $5 \times 81 \times 8 \times 8$, about $150$ times fewer. The advantage is lower for smaller search areas. Perhaps more important is the fact that the simple method exhibits a serious jigsaw puzzle effect. The $8 \times 8$ patch is matched without any reference to context, and a match is often found in totally unrelated parts of the picture. The binary search technique uses the general context to guide the high resolution comparisons. This makes possible yet another speedup, because smaller windows can be used. Window sizes as small as $2 \times 2$ work reasonably well. The searches at very coarse levels rarely return mismatches, possibly because noise is averaged out in the reduction process, causing comparisons to be more stable. Reduced images are also more tolerant of geometric distortions.
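The coarse-to-fine search can be sketched as below. This is an illustrative reconstruction under my own conventions (pyramid and descriptor ordered coarsest first, a brute-force match at each level), not the thesis code:

```python
# Sketch of the binary search correlation: match the 6x6 descriptor
# window at the coarsest level over the whole search area, then refine
# the best position through successively finer levels, each time
# searching a 6x6 window in the corresponding 12x12 area.

def correlate(a, b):
    """Pseudo-normalized correlation 2*sum(ab)/(sum(a^2)+sum(b^2)),
    window means subtracted out."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    fa = [p - ma for p in fa]
    fb = [p - mb for p in fb]
    den = sum(x * x for x in fa) + sum(y * y for y in fb)
    return 2 * sum(x * y for x, y in zip(fa, fb)) / den if den else 1.0

def best_match(img, win, r0, c0, r1, c1):
    """Best (score, row, col) over window corners in [r0,r1] x [c0,c1]."""
    n = len(win)
    best = (-2.0, r0, c0)
    for r in range(r0, min(r1, len(img) - n) + 1):
        for c in range(c0, min(c1, len(img[0]) - n) + 1):
            patch = [row[c:c + n] for row in img[r:r + n]]
            best = max(best, (correlate(win, patch), r, c))
    return best

def binary_search(pyramid, desc):
    """pyramid: reduced search pictures, coarsest first; desc: the
    matching 6x6 feature windows. Returns the window corner found in
    the unreduced picture."""
    n = len(desc[0])
    _, r, c = best_match(pyramid[0], desc[0], 0, 0,
                         len(pyramid[0]) - n, len(pyramid[0][0]) - n)
    for img, win in zip(pyramid[1:], desc[1:]):
        r, c = 2 * r, 2 * c            # map best area to the finer level
        _, r, c = best_match(img, win, r, c, r + n, c + n)
    return r, c
```

Each refinement step searches 7 by 7 corner positions, i.e. a 6 by 6 window in a 12 by 12 area, as in the text.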

Figure 6.3: Example of the correlator's performance on a difficult example. The interest operator has chosen features in the upper image, and the correlator has attempted to find corresponding regions in the lower one. The cart moved about one and a half meters forward between the images. Some mistakes are evident. The correlator had no a-priori knowledge about the relationship of the two images and the entire second image was searched for each feature.



Figure 6.4: An outdoor application of the binary search correlator

The current routine uses a measure for the cross correlation which I call pseudo normalized, given by the formula $${2 \sum{ab} \over \sum{a^2} + \sum{b^2}}$$ that has limited contrast sensitivity, avoids the degeneracies of normalized correlation on informationless windows, and is slightly cheaper to compute. A description of its derivation may be found in Appendix 6.

Timing

The formula above is expressed in terms of $A$ and $B$ with the means subtracted out. It can be translated into an expression involving $\sum{A}$, $\sum{A^2}$, $\sum{B}$, $\sum{B^2}$ and $\sum{(A-B)^2}$. By evaluating the terms involving only $A$, the source window, outside of the main correlation loop, the work in the inner loop can be reduced to evaluating $\sum{B}$, $\sum{B^2}$ and $\sum{(A-B)^2}$. This is done in three PDP-10 machine instructions per point by using a table in which entry $i$ contains both $i$ and $i^2$ in subfields, and by generating in-line code representing the source window, three instructions per pixel, eliminating the need for inner loop end tests and enabling the $A-B$ computation to be done during indexing.
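Written out explicitly (this expansion is implied but not printed here; $A$ and $B$ are the raw windows, $n = 36$ pixels per window, $a = A - \bar A$, $b = B - \bar B$), the identities used are

$$\sum a^2 = \sum A^2 - \frac{(\sum A)^2}{n}, \qquad \sum b^2 = \sum B^2 - \frac{(\sum B)^2}{n}$$

$$\sum ab = \frac{\sum A^2 + \sum B^2 - \sum (A-B)^2}{2} - \frac{\sum A \sum B}{n}$$

so the pseudo normalized measure can be computed from $\sum A$, $\sum A^2$, $\sum B$, $\sum B^2$ and $\sum (A-B)^2$ alone, with the $A$-only terms hoisted out of the inner loop.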

Each pixel comparison takes about one microsecond. The time required to locate an $8 \times 8$ window in a $16$ by $16$ search area is about $10$ milliseconds. A single feature requires $5$ such searches, for a total per feature time of $50$ ms.

One of the three instructions could be eliminated if $\sum{B}$ and $\sum{B^2}$ were precomputed for every position in the picture. This can be done incrementally, involving examination of each pixel only twice, and would result in an overall speedup if many features are to be searched for in the same general area.
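The incremental precomputation can be sketched in one dimension (the real tables would be two dimensional; this illustrative version shows only how each pixel is examined just twice, once entering the running window and once leaving it):

```python
# Sketch of incremental window sums: running sum(B) and sum(B^2) over
# every n-wide window of a row of pixels.

def window_sums(row, n):
    s = sum(row[:n])
    s2 = sum(p * p for p in row[:n])
    out = [(s, s2)]
    for i in range(n, len(row)):
        s += row[i] - row[i - n]            # pixel enters, pixel leaves
        s2 += row[i] ** 2 - row[i - n] ** 2
        out.append((s, s2))
    return out
```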

The correlator has approximately a 10% error rate on features selected by the interest operator in our sample pictures. Typical image pairs are generally taken about two feet apart with a $60$° field of view camera.

Chapter 7: Stereo

Slider Stereo

At each pause on its computer controlled itinerary the cart slides its camera from left to right on the 52 cm track, taking 9 pictures at precise 6.5 cm intervals.

Points are chosen in the fifth (middle) of these 9 images, either by the correlator to match features from previous positions, or by the interest operator.



Figure 7.1: A typical ranging. The nine pictures are from a slider scan. The interest operator chose the marked feature in the central image, and the correlator found it in the other eight. The small curves at bottom are distance measurements of the feature made from pairs of the images. The large beaded curve is the sum of the measurements over all 36 pairings. The horizontal scale is linear in inverse distance.




Figure 7.2: Ranging a distant feature




Figure 7.3: Ranging in the presence of a correlation error. Note the mis-match in the last image. Correct feature pairs accumulate probability at the correct distance, while pairs with the incorrect feature dissipate their probability over a spread of distances.


The camera slides parallel to the horizontal axis of the (distortion corrected) camera co-ordinate system, so the parallax-induced apparent displacement of features from frame to frame in the 9 pictures is purely in the X direction.

The correlator looks for the points chosen in the central image in each of the eight other pictures. The search is restricted to a narrow horizontal band. This has little effect on the computation time, but it reduces the probability of incorrect matches.

In the case of correct matches, the distance to the feature is inversely proportional to its displacement from one image to another. The uncertainty in such a measurement is the difference in distance that a shift of one pixel in the image would make. The uncertainty varies inversely with the physical separation of the camera positions where the pictures were taken (the stereo baseline). Long baselines give more accurate distance measurements.
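In the usual pinhole formulation (my notation; the thesis does not spell this out), with focal length $f$ in pixels, baseline $b$, and disparity $d$ pixels, the distance is

$$Z = \frac{f\,b}{d}, \qquad \Delta Z \approx \left|\frac{\partial Z}{\partial d}\right| = \frac{f\,b}{d^2} = \frac{Z^2}{f\,b}$$

so the one-pixel uncertainty $\Delta Z$ indeed shrinks as the baseline $b$ grows, and grows as the square of the distance.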

After the correlation step the program knows a feature's position in nine images. It considers each of the 36 ($={9 \choose 2}$) possible image pairings as a stereo baseline, and records the estimated distance to the feature (actually inverse distance) in a histogram. Each measurement adds a little normal curve to the histogram, with mean at the estimated distance, and standard deviation inversely proportional to the baseline, reflecting the uncertainty. The area under each curve is made proportional to the product of the correlation coefficients of the matches in the two images (in the central image this coefficient is taken as unity), reflecting the confidence that the correlations were correct. The area is also scaled by the normalized dot product of the X axis and the shift of the feature in each of the two baseline images from the central image. That is, a distance measurement is penalized if there is significant motion of the feature in the Y direction.

The distance to the feature is indicated by the largest peak in the resulting histogram, if this peak is above a certain threshold. If below, the feature is forgotten about.

The correlator frequently matches features incorrectly. The distance measurements from incorrect matches in different pictures are usually inconsistent. When the normal curves from 36 picture pairs are added up, the correct matches agree with each other, and build up a large peak in the histogram, while incorrect matches spread themselves more thinly. Two or three correct correlations out of the eight will usually build a peak sufficient to offset a larger number of errors.
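The histogram voting can be sketched as follows. The bin layout, the sigma constant, and the input format are my choices for illustration; only the shape of the scheme (normal curves in inverse distance, spread shrinking with baseline, area scaled by confidence, peak thresholding) comes from the text:

```python
# Sketch of the 36-pairing distance histogram.

import math

def add_normal(hist, bins, mean, sigma, area):
    """Add a normal curve of the given mean, spread and area."""
    for i, x in enumerate(bins):
        hist[i] += area * math.exp(-0.5 * ((x - mean) / sigma) ** 2) / sigma

def feature_distance(measurements, bins, k=0.05, threshold=0.0):
    """measurements: (inverse_distance, baseline, confidence) triples.
    Returns the inverse distance at the histogram peak, or None if the
    peak falls below the threshold (the feature is forgotten)."""
    hist = [0.0] * len(bins)
    for inv_d, baseline, conf in measurements:
        add_normal(hist, bins, inv_d, k / baseline, conf)
    peak = max(range(len(bins)), key=lambda i: hist[i])
    return bins[peak] if hist[peak] > threshold else None
```

A few consistent long-baseline measurements produce a tall narrow peak that easily outweighs an equal number of scattered mismatches.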

In this way eight applications of a mildly reliable operator interact to make a very reliable distance measurement. Figures 7-1 through 7-3 show typical rangings. The small curves are measurements from individual picture pairs, the beaded curve is the final histogram.

Motion Stereo

The cart navigates exclusively by vision. It deduces its own motion from the apparent 3D shift of the features around it.

After having determined the 3D location of objects at one position, the computer drives the cart about a meter forward.

At the new position it slides the camera and takes nine pictures. The correlator is applied in an attempt to find all the features successfully located at the previous position. Feature descriptions extracted from the central image at the last position are searched for in the central image at the new stopping place.

Slider stereo then determines the distance of the features so found from the cart's new position. The program now knows the 3D position of the features relative to its camera at the old and the new locations. It can figure out its own movement by finding the 3D co-ordinate transform that relates the two.

There can be mis-matches in the correlations between the central images at two positions and, in spite of the eight way redundancy, the slider distance measurements are sometimes in error. Before the cart motion is deduced, the feature positions are checked for consistency. Although it doesn't yet have the co-ordinate transform between the old and new camera systems, the program knows that the distance between any pair of features should be the same in both. It makes a matrix in which element $[i,j]$ is the absolute value of the difference in distances between points $i$ and $j$ in the first and second co-ordinate systems, divided by the expected error (based on the one pixel uncertainty of the ranging).

Figure 7.4: The feature list before and after the mutual-distance pruning step. In this diagram the boxes represent features whose three dimensional position is known.

Figure 7.5: Another pruning example, in more difficult circumstances. Sometimes the pruning removed too many points. The cart collided with the cardboard tree to the left later in this run.

Each row of this matrix is summed, giving an indication of how much each point disagrees with the other points. The idea is that while points in error disagree with virtually all points, correct positions agree with all the other correct ones, and disagree only with the bad ones.

The worst point is deleted, and its effect is removed from the remaining points in the row sums. This pruning is repeated until the worst error is within the error expected from the ranging uncertainty.
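The pruning can be sketched as below. The stopping rule here (average row error within the expected per-pair error) is an illustrative stand-in for the thesis's criterion, and the point format is mine:

```python
# Sketch of mutual-distance pruning. `old` and `new` are lists of 3D
# points (same feature indices in both co-ordinate systems); `expected`
# is the acceptable per-pair distance error.

import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def prune(old, new, expected):
    keep = list(range(len(old)))

    def err(i, j):
        return abs(dist(old[i], old[j]) - dist(new[i], new[j])) / expected

    while len(keep) > 2:
        # row sums: how much each point disagrees with all the others
        rows = {i: sum(err(i, j) for j in keep if j != i) for i in keep}
        worst = max(keep, key=lambda i: rows[i])
        if rows[worst] / (len(keep) - 1) <= 1.0:  # within expected error
            break
        keep.remove(worst)
    return keep
```

A point whose measured position is badly wrong disagrees with every other point, so its row sum dominates and it is deleted first.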

After the pruning, the program has a number of points, typically 10 to 20, whose position error is small and pretty well known. The program trusts these, and records them in its world model, unless it had already done so at a previous position. The pruned points are forgotten forevermore.

Now comes the co-ordinate transform determining step. We need to find a three dimensional rotation and translation that, if applied to the co-ordinates of the features at the first position, minimizes the sum of the squares of the distances between the transformed first co-ordinates and the raw co-ordinates of the corresponding points at the second position. Actually the quantity that's minimized is the foregoing sum, but with each term divided by the square of the uncertainty in the 3D position of the points involved, as deduced from the one pixel shift rule. This weighting does not make the solution more difficult.

The error expression is expanded. It becomes a function of the rotation and translation, with parameters that are the weighted averages of the $x$, $y$ and $z$ co-ordinates of the features at the two positions, and averages of their various cross-products. These averages need to be determined only once, at the beginning of the transform finding process.

To minimize the error expression, its partial derivative with respect to each variable is set to zero. It is relatively easy to simultaneously solve the three linear equations thus resulting from the vector offset, getting the optimal offset values for a general rotation. This gives symbolic expressions (linear combinations of the rotation matrix coefficients) for each of the three vector components. Substituting these values into the error expression makes it a function of the rotation alone. This new, translation determined, error expression is used in all the subsequent steps.

Minimizing the error expression under rotation is surprisingly difficult, mainly because of the non-linear constraints in the 3D rotation matrix. The next six paragraphs outline the struggle. Each step was forced by the inadequacies of the previous one.

The program begins by ignoring the non-linearities. It solves for the general 3D linear transformation, nine elements of a matrix, that minimizes the least square error. The derivatives of the error expression with respect to each of the matrix coefficients are equated to zero, and the nine resulting simultaneous linear equations are solved for the nine coefficients. If the points had undergone an error-free rigid rotation and translation between the two positions, the result would be the desired rotation matrix, and the problem would be solved.

Because there are errors in the determined position of the features, the resulting matrix is usually not simply a rotation, but involves stretching and skewing. The program ortho-normalizes the matrix. If the position errors were sufficiently small, this new matrix would be our answer.
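The ortho-normalization can be done in several ways; the thesis does not say which it used. A plausible stand-in is Gram-Schmidt on the rows of the fitted matrix:

```python
# Sketch: ortho-normalize a nearly-rigid 3x3 matrix by Gram-Schmidt on
# its rows. One of several possible schemes; illustrative only.

def orthonormalize(m):
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    rows = []
    for r in m:
        r = list(r)
        for q in rows:  # subtract components along rows already fixed
            d = dot(r, q)
            r = [a - d * b for a, b in zip(r, q)]
        norm = dot(r, r) ** 0.5
        rows.append([a / norm for a in r])
    return rows
```

Applied to a matrix with mild stretch and skew, this returns a matrix whose rows are mutually orthogonal unit vectors, suitable as a starting approximation for the constrained minimization.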

The errors are high enough to warrant adding the rigid rotation constraints in the least squares minimization. The error expression is converted from a linear expression in nine matrix coefficients into an unavoidably non-linear function in three parameters that uniquely characterize a rotation.

This new error expression is differentiated with respect to each of the three rotation parameters, and the resulting expressions are equated to zero, giving us three non-linear equations in three unknowns. A strenuous attempt at an analytic solution of this simultaneous non-linear system failed, so the program contains code to solve the problem iteratively, by Newton's method.

The rotation expressed by the ortho-normalized matrix from the previous step becomes the initial approximation. Newton's method for a multi-variate system involves finding the partial derivative of each expression whose root is sought with respect to each variable. In our case there are three variables and three equations, and consequently nine such derivatives. The nine derivatives, each a closed form expression of the rotation variables, are the coefficients of a 3 by 3 matrix (the Jacobian of the system) that characterizes the first order changes, with the parameters, in the expressions whose roots are sought. The next Newton's method approximation is found by multiplying the inverse of this matrix by the value of the root expressions, and subtracting the resulting values (which will be 0 at the root) from the parameter values of the previous approximation.

Four or five iterations usually brings the parameters to within our floating point accuracy of the correct values. Occasionally, when the errors in the determined feature locations are high, the process does not converge. The program detects this by noting the change in the original error expression from iteration to iteration. In case of non-convergence, the program picks a random rotation as a new starting point, and tries again. It is willing to try up to several hundred times. The rotation with the smallest error expression ever encountered during such a search (including the initial approximation) is returned as the answer.

Since the summations over the co-ordinate cross-products are done once and for all at the beginning of the transformation determination, each iteration, involving evaluation of about a dozen moderately large expressions and a 3 by 3 matrix inversion, is relatively fast. The whole solving process, even in cases of pathological non-convergence, takes one or two seconds of computer time.

Appendix 7 presents the mathematics of the transform finder in greater detail.

Chapter 8: Path Planning

The cart vision system has an extremely simple minded approach to the world. It models everything it sees as clusters of points. If enough such points are found on each nearby object, this model is adequate for planning a non-colliding path to a destination.

The features in the cart's 3D world model can be thought of as fuzzy ellipsoids, whose dimensions reflect the program's uncertainty of their position. Repeated applications of the interest operator as the cart moves cause virtually all visible objects to become modelled as clusters of overlapping ellipsoids.

To simplify the problem, the ellipsoids are approximated by spheres. Those spheres sufficiently above the floor and below the cart's maximum height are projected on the floor as circles. The cart itself is modelled as a 3 meter circle. The path finding problem then becomes one of maneuvering the cart's 3 meter circle between the (usually smaller) circles of the potential obstacles to a desired location.

It is convenient (and equivalent) to conceptually shrink the cart to a point, and add its radius to each and every obstacle. An optimum path in this environment will consist of either a straight run between start and finish, or a series of tangential segments between the circles and contacting arcs (imagine loosely laying a string from start to finish between the circles, then pulling it tight).

Superficially, the problem seems to be one of finding the shortest path in a graph of connected vertices. The tangential segments are the edges of the graph; the obstacles, along with the destination and source, are the vertices. There are algorithms (essentially breadth first searches that repeatedly extend the shortest path to any destination encountered) which, given the graph, can find the desired path in $O(n^2)$ time, where $n$ is the number of vertices. On closer inspection, a few complications arise when we try to apply such an algorithm.

There are four possible paths between each pair of obstacles (Figure 8.1), because each tangent can approach an obstacle clockwise or counterclockwise. Expanding each obstacle into two distinct vertices, one for clockwise circumnavigations, the other for counterclockwise paths, handles this.



Figure 8.1: The four tangential paths between circular obstacles A and B

Setting up the distance matrix of the graph involves detecting which of the tangential paths are not allowed, because they are blocked by other obstacles (such blocked paths are represented by infinite distances). There are $O(n^2)$ tangent paths between obstacle pairs. Determining whether each particular path is blocked involves examining at least a fraction of the other obstacles, a process that takes $O(n)$ time. Thus generating the distance graph, whether explicitly before running the shortest path algorithm, or implicitly within the algorithm itself, takes $O(n^3)$ time. With this consideration, the algorithm is $O(n^3)$.
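The per-tangent blocking test can be sketched as a point-to-segment distance check against every other (cart-radius-augmented) obstacle circle. The function names and the convention of skipping the two circles the segment is tangent to are my own:

```python
# Sketch of the O(n) blocking test for one candidate tangent segment.

def seg_point_dist(a, b, p):
    """Distance from point p to segment a-b (all 2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    t = 0.0 if length2 == 0 else max(
        0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
    cx, cy = ax + t * dx, ay + t * dy   # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def blocked(a, b, obstacles, skip=()):
    """True if segment a-b passes through any obstacle (x, y, r),
    ignoring the indices of the obstacles it is tangent to."""
    return any(seg_point_dist(a, b, (x, y)) < r
               for i, (x, y, r) in enumerate(obstacles) if i not in skip)
```

Blocked tangents get infinite distance in the graph; running this test for each of the $O(n^2)$ tangents gives the $O(n^3)$ setup cost noted above.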

The obstacles are not dimensionless points. Arriving on one tangent and leaving on another also involves travel on the circular arc between the tangents. Furthermore, paths arriving at an obstacle tangentially from different places do not end up at the same place. Our circular obstacles occupy a finite amount of space. Both these considerations can be handled by noting that there are only a finite number of tangent points around each obstacle we need consider, and these tangent points are dimensionless.

Each obstacle develops four tangent points because of the existence of every other obstacle. A path problem with $n$ circular obstacles can thus be translated exactly into a shortest path in graph problem with $4n(n-1)$ vertices, each edge in the graph corresponding to a tangent between two obstacles plus the arc leading from one end of the tangent path to the beginning of another one. The solution time thus appears to grow to $O(n^4)$. Fundamentally, this is correct, but significant shortcuts are possible.

Figure 8.2: The shortest path finder's solution to a randomly constructed problem. The route is from the lower left corner to the upper right. The numbered circles are the obstacles, the wiggly line is the solution.



Figure 8.3: Another path finder solution

Figure 8.4: A case where the approximate and exact methods differed. Top diagram is the exact solution, bottom one is the approximate algorithm's guess.

The distance matrix for the tangent points is extremely sparse. In our possible solution space, each tangent point leading from an obstacle connects to only about $2n$ others, out of the $4n(n-1)$ possible. This fact can be used to reduce the amount of work from $O(n^4)$ to about $O(n^3)$. Appendix 8 gives the details.

The algorithm just outlined finds the guaranteed shortest obstacle avoiding path from start to finish. It is rather expensive in time, and especially in space. It requires several two dimensional arrays of size $n$ by $n$. The number of obstacles sometimes grows to be about 100. Because both storage and running time needed conservation, the final version of the cart program used a simplified, and considerably cheaper, approximation to this approach.

The simplified program, also described in greater detail in Appendix 8, does not distinguish between different tangent points arriving at a single obstacle. Instead of a very sparse distance matrix of size $4n(n-1)$ squared, it deals with a dense matrix of dimension $2n$ by $2n$. Many of the arrays that were of size $n^2$ in the full algorithm are only of dimension $n$ in the cheap version. The arc lengths for travel between tangents are added into the computed distances, but sometimes too late to affect the search. If the obstacles were all of zero radius, this simple algorithm would still give an exact solution. As obstacle size grows, so does the probability of non-optimal solutions.

In randomly generated test cases containing about fifty typical obstacles, the approximation finds the best solution about 90% of the time. In the other cases it produces solutions only slightly longer.

A few other considerations are essential in the path planning. The charted routes consist of straight lines connected by tangent arcs, and are thus plausible paths for the cart, which steers like an automobile. This plausibility is not necessarily true of the start of the planned route, which, as presented thus far, does not take the initial heading of the cart into account. The plan could, for instance, include an initial segment going off 90° from the direction in which the cart points, and thus be impossible to execute.

The current code handles this problem by including a pair of “phantom” obstacles along with the real perceived ones. The phantom obstacles have a radius equal to the cart's minimum steering radius, and are placed, in the planning process, on either side of the cart at such a distance that after their radius is augmented by the cart's radius (as happens for all the obstacles), they just touch the cart's centroid, and each other, with their common tangents being parallel to the direction of the cart's heading. They effectively block the area made inaccessible to the cart by its maneuverability limitations.

In the current program the ground plane, necessary to decide which features are obstacles, and which are not, is defined a priori, from the known height of the cart camera above the floor, and the angle of the camera with respect to the horizontal (measured before a run by a protractor/level). Because the program runs so slowly that the longest feasible travel distance is about 20 meters, this is adequate for now. In future versions the cart should dynamically update its ground plane orientation model by observing its own motion as it drives forward. The endpoints of each meter-long lurch define a straight line that is parallel to the local ground. The vector component of the ground plane model in the direction of the lurch can be tilted to match the observed cart motion, while the component perpendicular to that is left unchanged. After moving in two non-collinear lurches, all ground-plane orientation parameters would be updated. This process would allow the cart to keep its sanity while traversing hilly terrain. Because the motion determination has short term inaccuracies, the tilt model should be updated only fractionally at each move, in the manner of exponential smoothing.

Path Execution

After the path to the destination has been chosen, a portion of it must be implemented as steering and motor commands and transmitted to the cart. The control system is primitive. The drive motor and steering motors may be turned on and off at any time, but there exists no means to accurately determine just how fast or how far they have gone. The current program makes the best of this bad situation by incorporating a model of the cart that mimics, as accurately as possible, the cart's actual behavior. Under good conditions, as accurately as possible means about 20%; the cart is not very repeatable, and is affected by ground slope and texture, battery voltage, and other less obvious externals.

Figure 8.5: An example of the simulator's behavior. The diagram is a plan view of the path executer's world model; the grid cells are one meter on a side. The cart's starting position and final destination and orientation are indicated by arrows. The two large circles, only portions of which are visible, represent the analytic two-arc path. It goes from?Start?through the tangent of the two circles to?Finish. The heavier paths between the two points represent the iterations of the simulator as its parameters were adjusted to compensate for the cart's dynamic response.

The path executing routine begins by excising the first 0.75 meters of the planned path. This distance was chosen as a compromise between average cart velocity, and continuity between picture sets. If the cart moves too far between picture digitizing sessions, the picture will change too much for reliable correlations. This is especially true if the cart turns (steers) as it moves. The image seen by the camera then pans across the field of view. The cart has a wide angle lens that covers 60° horizontally. The 0.75 meters, combined with the turning radius limit (5 meters) of the cart results in a maximum shift in the field of view of 15°, one quarter of the entire image.

This 0.75 meter segment can't be followed precisely, in general, because of dynamic limits in the cart motion. The cart can steer reliably only when it is driving. It takes a finite time for the steering motor to operate. When the drive motors are energized the robot takes a while to accelerate to its terminal velocity, and it coasts for a half meter when the motors are turned off. These complications were too difficult to model in the obstacle path planning.

Instead the program examines the cart's position and orientation at the end of the desired 0.75 meter lurch, relative to the starting position and orientation. The displacement is characterized by three parameters: displacement forward, displacement to the right, and change in heading. In closed form the program computes a path that will accomplish this movement in two arcs of equal radius but different lengths. The resulting trajectory has a general “S” shape. This closed form also has three parameters: the radius of the two arcs, the distance along the first arc, and the distance along the second, just the right number for a constrained solution of the desired displacement.
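The forward problem for this parameterization (the pose reached for a given radius and pair of arc lengths) can be sketched as follows. This is an illustrative reconstruction, not the thesis code; positive radius turns left, and the second arc negates the radius to produce the “S” shape:

```python
import math

def arc_step(x, y, heading, radius, length):
    """Advance a pose along a circular arc; positive radius turns left."""
    dphi = length / radius                      # heading change along the arc
    x += radius * (math.sin(heading + dphi) - math.sin(heading))
    y -= radius * (math.cos(heading + dphi) - math.cos(heading))
    return x, y, heading + dphi

def two_arc_endpoint(r, s1, s2):
    """Endpoint of the S-shaped path: two arcs of equal radius but
    opposite curvature, of lengths s1 and s2, starting from the origin
    with heading zero."""
    x, y, h = arc_step(0.0, 0.0, 0.0, r, s1)    # first arc, turning left
    return arc_step(x, y, h, -r, s2)            # second arc, turning right
```

Inverting this map analytically, to recover (r, s1, s2) from a desired displacement and heading change, is the closed-form solution the text describes.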

Making the arcs of equal radius minimizes the curvature of the planned path, a desirable goal for a vehicle that steers slowly (as well as unreliably). Even with minimized curvature, the two-arc path can only be approximated, since the steering takes a finite amount of time, during which the robot must be rolling.

I was unable to find a closed form expressing the result of simultaneous steering and driving, so the program relies on a simulation. The on and off times for the drive motor necessary to cause the cart to cover the required distance are computed analytically, as are the steering motor on times necessary to set the cart turning with the correct radii. These timings are then fed to the simulator, and the final position of the cart is examined. Because the steering was not instantaneous, the simulated path usually turns out to be less curvy than the requested one. The difference between the simulated final position and orientation and the desired one is used to generate a new input for the analytic solver. (To clarify: if the simulation says the cart ends up one meter too far to the right, the next iteration will request a position one meter leftward. This process works well when the results of the simulation respond nearly linearly to the initial requests.) About five iterations of this step are usually sufficient to find an adequate command sequence. This sequence is then transmitted, and the cart moves, more or less as simulated.
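The iteration can be sketched abstractly like this. It is a toy illustration, with `simulate` standing in for the cart simulator as a function from a requested displacement to the displacement actually achieved; as the text notes, the scheme works when the simulator responds nearly linearly to its input:

```python
def solve_command(simulate, desired, iterations=5):
    """Iterate the request fed to a simulator until the simulated
    result matches the desired displacement (a sketch of the scheme
    in the text; 'desired' is e.g. (forward, right, heading-change))."""
    request = list(desired)
    for _ in range(iterations):
        achieved = simulate(request)
        # If the simulation says the cart ends up one meter too far to
        # the right, request a position one meter leftward next time.
        request = [r + (d - a)
                   for r, d, a in zip(request, desired, achieved)]
    return request
```

For a simulator that undershoots each component by a fixed factor, this fixed-point iteration converges geometrically, consistent with the five or so iterations the program needed.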

Except for the endpoints, the path generated in this way differs, in general, from the one produced by the obstacle avoider algorithm. For 0.75 meter lurches, however, it stays within a few centimeters of it. The cart avoids each obstacle by a safety factor of about a half meter, so such inaccuracies can be tolerated. In any case, the mechanical precision of the cart's response is poor enough, and its seeing sparse enough, to require such a safety margin.

Chapter 9: Evaluation


Many years ago I chose the line of research described herein intending to produce a combination of hardware and software by which the cart could visually navigate reliably in most environments. For a number of reasons, the existing system is only a first approximation to that youthful ideal.

One of the most serious limitations is the excruciating slowness of the program. In spite of my best efforts, and many compromises in the interest of speed, it takes 10 to 15 minutes of real time to acquire and consider the images at each lurch, on a lightly loaded KL-10. This translates to an effective cart velocity of 3 to 5 meters an hour. Interesting obstacle courses (two or three major obstacles, spaced far enough apart to permit passage within the limits of the cart's size and maneuverability) are at least 15 meters long, so interesting cart runs take from 3 to 5 hours with little competition from other users, and are impossibly long under other conditions.

During the last few weeks of the AI lab's residence in the D.C. Power building, when the full-fledged obstacle runs described here were executed, such conditions of light load were available only on some nights, between 2 and 6 AM, and on some weekend mornings. The cart's video system battery lifetime on a full charge is at most 5 hours, so the limits on field tests, and consequently on the debug/improve loop, were strictly circumscribed.

Although major portions of the program had existed and been debugged for several years, the complete obstacle avoiding system (including fully working hardware, as well as programs) was not ready until two weeks before the lab's scheduled move. The first week was spent quashing unexpected trivial bugs in the newest parts of the code, which caused very silly cart behavior under various conditions, and recalibrating camera and motor response models.

The final week was devoted to serious observation (and filming) of obstacle runs. Three full (about 20 meter) runs were completed, two indoors and one outdoors. Two indoor false starts, aborted by failure of the program to perceive an obstacle, were also recorded. The two long indoor runs were nearly perfect.

In the first, the cart successfully slalomed its way around a chair, a large cardboard icosahedron, and a cardboard tree; then, at a distance of about 16 meters, it encountered a cluttered wall and backed up several times trying to find a way around it.

The second indoor run involved a more complicated set of obstacles, arranged primarily into two overlapping rows blocking the goal. The cart backed up twice to negotiate the tight turn required to go around the first row, then executed several steer forward / back up moves, lining itself up to pass through a barely wide enough gap in the second row. This run had to be terminated, sadly, before the cart had gone through the gap, because of declining battery charge and increasing system load.



Figure 9.1: A sample output from the three dimensional drawing program that inspired the construction of the ill-fated cardboard trees and rocks



Figure 9.2: Gray scale output from the 3D program. See how seductive the pictures are?

The outdoor run was less successful. It began well; in the first few moves the program correctly perceived a chair directly in front of the camera, and a number of more distant cardboard obstacles and sundry debris. Unfortunately, the program's idea of the cart's own position became increasingly wrong. At almost every lurch, the position solver deduced a cart motion considerably smaller than the actual move. By the time the cart had rounded the foreground chair, its position model was so far off that the distant obstacles, seen early in the run and again later, were replicated in different positions in the cart's confused world model, to the point where the program thought an actually clear distant path was blocked. I restarted the program, clearing the world model, when the planned path became too silly. At that time the cart was four meters in front of a cardboard icosahedron, and its planned path led straight through it. The newly re-incarnated program failed to notice the obstacle, and the cart collided with it. I manually moved the icosahedron out of the way and allowed the run to continue. It did so uneventfully, though there were continued occasional slight errors in the self-position deductions. The cart encountered a large cardboard tree towards the end of this journey and detected a portion of it only just in time to squeak by without colliding.

The two short abortive indoor runs involved setups nearly identical to the successful two-row long run described one paragraph ago. The first row, about three meters in front of the cart's starting position, contained a chair, a real tree (a small cypress in a planting pot), and a polygonal cardboard tree. The cart saw the chair instantly and the real tree after the second move, but never saw the cardboard tree at all. Its planned path around the two obstacles it did see put it on a collision course with the unseen one. Placing a chair just ahead of the cardboard tree fixed the problem, and resulted in a successful run. Never, in all my experience, has the code described in this thesis failed to notice a chair in front of the cart.

Flaws Found

These runs suggest that the system suffers from two serious weaknesses. It does not see simple polygonal (bland and featureless) objects reliably, and its visual navigation is fragile under certain conditions. Examination of the program's internal workings suggests some causes and possible solutions.

Bland Interiors

The program sometimes fails to see obstacles lacking sufficient high contrast detail within their outlines. In this regard, the polygonal tree and rock obstacles I whimsically constructed to match diagrams from a 3D drawing program were a terrible mistake. In none of the test runs did the programs ever fail to see a chair placed in front of the cart, but half the time they did fail to see a pyramidal tree or an icosahedral rock made of clean white cardboard. These contrived obstacles were picked up reliably at a distance of 10 to 15 meters, silhouetted against a relatively unmoving (over slider travel and cart lurches) background, but were only rarely and sparsely seen at closer range, when their outlines were confused by a rapidly shifting background, and their bland interiors provided no purchase for the interest operator or correlator. Even when the artificial obstacles were correctly perceived, it was by virtue of only two to four features. In contrast, the program usually tracked five to ten features on nearby chairs.

It may seem ironic that my program does poorly in the very situations that were the only possible environment for one of its predecessors, SRI's Shakey. Shakey's environment was a large scale “blocks world”, consisting entirely of simple, uniformly colored prismatic solids. Its vision was edge based and monocular, except that it occasionally used a laser range finder to augment its model based 3D reasoning. My area correlation techniques were chosen to work in highly complex and textured “real world” surroundings. That they do poorly in blocks world contexts suggests complementarity. A combination of the two might do better than either alone.

A linking edge follower could probably find the boundary of, say, a pyramidal tree in each of two disparate pictures, even if the background had shifted severely. It could do a stereo matching by noting the topological and geometric similarities between subsets of the edge lists in the two pictures. Note that this process would not be a substitute for the area correlation used in the current program, but an augmentation of it. Edge finding is expensive and not very effective in the highly textured and detailed areas that abound in the real world, and which are area correlation's forte.

Another matching method likely to be useful in some scene areas is region growing, guided by very small scale area correlation.

In the brightly sunlit outdoor run the artificial obstacles had another problem. Their white coloration turned out to be much brighter than any “naturally” occurring extended object. These super bright, glaring, surfaces severely taxed the very limited dynamic range of the cart's vidicon/digitizer combination. When the icosahedron occupied 10% of the camera's field of view, the automatic target voltage circuit in the electronics turned down the gain to a point where the background behind the icosahedron appeared nearly solid black.

Confused Maps

The second major problem exposed by the runs is glitches in the cart's self-position model. This model is updated after a lurch by finding the 3D translation and rotation that best relates the 3D positions of the set of tracked features before and after the lurch. In spite of the extensive pruning that precedes this step (and partly because of it, as is discussed later), small errors in the measured feature positions sometimes cause the solver to converge to the wrong transform, giving a position error well beyond the expected uncertainty. Features placed into the world model before and after such a glitch will not be in the correct relative positions. Often an object seen before the glitch is seen again after it, now displaced, and the combination of old and new positions blocks a path that is in actuality open.

This problem showed up mainly in the outdoor run. I've also observed it indoors in the past, in simple mapping runs, before the entire obstacle avoider was assembled. There appear to be two major causes for it, and a wide range of supporting factors.

Poor seeing, resulting in too few correct correlations between the pictures before and after a lurch, is one culprit. The highly redundant nine-eyed stereo ranging is very reliable and causes few problems, but the non-redundant correlation necessary to relate the positions of features before and after a lurch is error prone. Features which have been located in 3D from one picture ninetuplet are sought in the next set by applying the correlator between the central images of the two sets. The points so found are then ranged using nine-eyed stereo in the new picture set. The cart's motion is deduced by finding the apparent 3D movement of the features from one picture set to the next.

Before this 3D co-ordinate transformation is computed, the matched points are pruned by considering their mutual three dimensional distances in the two co-ordinate systems. To within the known position uncertainty of each feature, these distances should be the same in the two systems. Points that disagree in this measure with the majority of other points are rejected.
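A much-simplified sketch of such mutual-distance pruning follows. This is hypothetical code: it uses a single fixed tolerance and a simple majority vote, where the real program weighs each feature's individual position uncertainty:

```python
import math
from itertools import combinations

def prune_by_mutual_distance(before, after, tol=0.3):
    """Keep only matched features whose pairwise 3D distances agree
    between the two co-ordinate systems.  'before' and 'after' are
    parallel lists of (x, y, z) points; a rigid cart motion preserves
    all inter-feature distances, so a mis-tracked feature disagrees
    with most of its pairings and is voted out."""
    n = len(before)
    bad_votes = [0] * n
    for i, j in combinations(range(n), 2):
        d_before = math.dist(before[i], before[j])
        d_after = math.dist(after[i], after[j])
        if abs(d_before - d_after) > tol:
            bad_votes[i] += 1
            bad_votes[j] += 1
    # Reject points that disagree with the majority of the others.
    return [k for k in range(n) if bad_votes[k] <= (n - 1) / 2]
```

As the text observes, when too few points are correctly matched such a majority scheme can itself go awry, settling on a spuriously agreeing set of mis-tracked features.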

If too few points are correctly matched, because the seeing was poor, or the scene was intrinsically too bland, the pruning process can go awry. This happened several times in the outdoor run.

The outdoor scene was very taxing for the cart's vidicon. It consisted of large regions (mainly my cardboard constructions) glaring in direct sunlight, and other important regions in deep shadow. The rest of the scene lay in a relatively narrow central gray range. It proved impossible to simultaneously avoid saturating the glaring and the shadowed areas and to get good contrast in the middle gray band, within the six bit (64 gray level) resolution of my digitized pictures. To make matters even more interesting, the program ran so slowly that the shadows moved significantly (up to a half meter) between lurches. Their high contrast boundaries were favorite points for tracking, enhancing the program's confusion.

Simple Fixes

Though elaborate (and thus far untried in our context) methods such as edge matching may greatly improve the quality of automatic vision in the future, subsequent experiments with the program revealed some modest incremental improvements that would have solved most of the problems in the test runs.

The issue of unseen cardboard obstacles turns out to be partly one of over-conservatism on the program's part. In all cases where the cart collided with an obstacle it had correctly ranged a few features on the obstacle in the prior nine-eyed scan. The problem was that the much more fragile correlation between vehicle forward moves failed, and the points were rejected in the mutual distance test. Overall the nine-eyed stereo produced very few errors. If the path planning stage had used the pre-pruning features (still without incorporating them permanently into the world model) the runs would have proceeded much more smoothly. All of the most vexing false negatives, in which the program failed to spot a real obstacle, would have been eliminated. There would have been a very few false positives, in which non-existent ghost obstacles would have been perceived. One or two of these might have caused an unnecessary swerve or backup. But such ghosts would not pass the pruning stage, and the run would have proceeded normally after the initial, non-catastrophic, glitch.

The self-position confusion problem is related, and in retrospect may be considered a trivial bug. When the path planner computes a route for the cart, another subroutine takes a portion of this plan and implements it as a sequence of commands to be transmitted to the cart's steering and drive motors. During this process it runs a simulation that models the cart's acceleration, rate of turning and so on, and which provides a prediction of the cart's position after the move. With the current hardware the accuracy of this prediction is not great, but it nevertheless provides much a priori information about the cart's new position. This information is used, appropriately weighted, in the least-squares co-ordinate system solver that deduces the cart's movement from the apparent motion in 3D of tracked features. It is not used, however, in the mutual distance pruning step that precedes this solving. When the majority of features have been correctly tracked, failure to use this information does not hurt the pruning. But when the seeing is poor, it can make the difference between choosing a spuriously agreeing set of mis-tracked features and the small correctly matched set.

Incorporating the prediction into the pruning, by means of a heavily weighted point that the program treats like another tracked feature, removes almost all the positioning glitches when the program is fed the pictures from the outdoor run.

I have not attempted any live cart runs with these program changes because the cramped conditions in our new on-campus quarters make cart operations nearly impossible.

Chapter 10: Spinoffs

Graphics

The display hardware in the early days of the AI lab was strictly vector oriented; six vector terminals from Information International Inc., and a Calcomp plotter. When I arrived at Stanford the lab had just acquired a new raster based display system from Data Disc Corp. The display packages in existence at the time, of which there were several, had all started life in the vector environment, and were all oriented around vector list display descriptions. Some of the packages had been extended by addition of routines that scanned such a vector list and drew the appropriate lines in a Data Disc raster.

In my opinion, this approach had two drawbacks if raster displaying was to supplant vector drawing. The vector list is compact for simple pictures, but can grow arbitrarily large for complex pictures with many lines. A pure raster representation, on the other hand, needs a fixed amount of storage for a given raster size, independent of the complexity of the image in the array. I often saw programs using the old display packages bomb when the storage allocated for their display lists was exceeded. A second objection to vector list representations is that they had no elegant representation for some of the capabilities of raster devices not shared by vector displays, notably large filled in areas.

These thoughts prompted me to write a raster oriented display package for the Data Disc monitors that included such primitives as ellipse and polygon filling (including convex and star polygons), and darkening and inversion as well as lighting of filled areas, in addition to the traditional linear commands. This package has developed a large following, and has been translated into several languages (the original package was written to run in a SAIL environment; portions of it have been modified to run in raw assembly code, and under Lisp 1.6 \ref{W2} and Maclisp \ref{B1} \ref{L1}). It is outlined in Appendix 10.



Figure 10.1: A silly picture produced from a GOD file. The little program which wrote the file may be found in Appendix 10

The Data Disc package was built around a rather peculiar and inflexible raster format (for instance, the raster lines are four-way interleaved) made necessary by the nature of the Data Disc hardware. When our XGP arrived there was no easy way to extend it to handle buffers for both the Data Disc and the much higher resolution XGP, though I did add a small routine which produced a coarse XGP page by expanding each Data Disc raster pixel.

Thus the arrival of the XGP created the need for a new package which could generate high resolution XGP images from calls similar to the ones used with Data Disc. I fashioned one by modifying the innermost parts of a copy of the older routines.

New raster devices were appearing, and a single system able to handle all of them became clearly desirable. The general byte raster format used in the vision package described later in this chapter was a good medium for implementing such generality, and I once again made a modified version of the Data Disc package, which this time drew into arbitrary rectangular subwindows of arbitrarily sized bit rasters. I added a few new features, such as the ability to deposit characters from XGP font files and halftone pictures into the drawing. Only partially implemented at this writing is a package which draws into byte rasters, using gray scale to produce images with reduced edge jagginess.

With both hard and soft copy graphic output devices available it became desirable to write programs which drew the same picture into buffers of different resolutions destined for different devices. Since the buffers were sometimes very large, it was reasonable to put them in separate core images. Thus evolved a message based version of the graphics routines in which the graphics main program does none of the actual drawing itself, but creates graphic “slaves” which run as separate jobs, and to which it sends messages such as “draw a line from [1.5,2.3] to [5.2,0.1].” A large number of such graphic servers can exist at any one time, and they can be individually activated and deactivated at any time. Whenever the controlling program executes a graphics primitive, all the servers currently active do the appropriate drawing. The messages sent to graphics servers can also be written into files (which can be created, destroyed, activated and deactivated just as if they were servers). Such files, which I call GOD files (for Graphics On Displays, Documents and other Devices), can be saved and used to generate drawings off-line, by manually feeding them to any of the available graphics servers. GOD files can also be used as subroutines in later drawing programs, and in more powerful ways, as suggested in the next paragraph.

A version of these routines is at the core of XGPSYN and XGPSYG, programs which can display pages from .XGP multifonted documentation files readably as gray scale images on standard TV monitors, as well as being able to list them on the XGP, after composing them as full bit rasters. XGPSYG, additionally, can insert diagrams and halftone representations of gray scale pictures into the documents being printed, in response to escape sequences occurring in the documents, pointing to GOD graphic files, hand eye format picture files or Leland Smith's music manuscript plot files (!).



Figure 10.2: Another GOD file example. This diagram was produced with the help of a circuit drawing extension to the main package

The obstacle avoider program used the message graphics system to document its internal state, generating in the process some of the diagrams seen in this thesis. The text in the thesis was formatted with Don Knuth's TEX typesetting system \ref{K1}, and printed with diagrams using XGPSYG.

Vision Software

The Hand-Eye project had collected a moderately large package of utility subroutines for acquiring and analyzing pictures when I began my vision work. Unfortunately it was far from complete, even in its basics, and was built around a “global” picture representation (global variables held picture size and location parameters) which made dealing with several pictures at the same time nearly impossible, especially if they were of different sizes. This was a great handicap to me, since I worked nearly from the start with hierarchies of reduced pictures and with excerpted windows. The format also made certain primitive operations, such as individual byte accessing, unnecessarily difficult, because it carried insufficient precomputed data.

In my opinion there was little worth salvaging, so I began writing my own vision primitives from scratch, starting with a data representation that included with each picture constants such as the number of words in a scanline, as well as a table of byte pointer skeletons for accessing each column. The package features high speed in the utility operations, and in such fancier things as correlation and filtering. It has a clever digitizing subroutine that compensates for some of the limitations of our hardware. This package has grown over the years; it is now considerably more extensive than the Hand-Eye library, and has largely supplanted it in other Stanford vision projects \ref{B2}.

The obstacle avoider uses an extension of the basic package which permits convenient handling of sequences of successively reduced pictures, and of chains of windows excerpted from such sequences.

PIX is a program that uses the vision subroutines to provide desk calculator type services for pictures to a user. It can digitize pictures from various sources, transform and combine them in many ways, transfer them to and from disk, display or print them on various output devices. Among its more exotic applications has been the generation of font definitions for our printer from camera input. XGPSYN and XGPSYG also make use of the vision package. The 3D shaded graphics in this thesis were produced with a program that uses the partially implemented gray scale GOD file server, which also calls on the vision package.

Further details may be found in Appendix 10.

| 人妻少妇精品无码专区动漫 | 国产成人精品优优av | 狠狠色噜噜狠狠狠狠7777米奇 | 美女毛片一区二区三区四区 | 日本一卡2卡3卡四卡精品网站 | 丰满人妻被黑人猛烈进入 | 牲欲强的熟妇农村老妇女视频 | 在线观看国产午夜福利片 | 欧美日韩精品 | 欧美freesex黑人又粗又大 | 国产在线精品一区二区高清不卡 | 欧洲极品少妇 | 丰满少妇人妻久久久久久 | 国产精品99久久精品爆乳 | 日韩欧美中文字幕在线三区 | 日韩视频 中文字幕 视频一区 | 成人免费视频一区二区 | 俺去俺来也在线www色官网 | 久久精品国产一区二区三区肥胖 | 少妇无码吹潮 | 亚洲精品国产精品乱码视色 | 小sao货水好多真紧h无码视频 | 国产在线精品一区二区三区直播 | 在线精品国产一区二区三区 | 精品人妻人人做人人爽夜夜爽 | 少妇人妻av毛片在线看 | 又湿又紧又大又爽a视频国产 | 色欲人妻aaaaaaa无码 | 国产人妻精品午夜福利免费 | 东京热一精品无码av | 男女猛烈xx00免费视频试看 | 色一情一乱一伦一视频免费看 | 亚洲综合无码久久精品综合 | 人人澡人人妻人人爽人人蜜桃 | 亚洲人成网站免费播放 | 狠狠色欧美亚洲狠狠色www | 久久视频在线观看精品 | 精品国偷自产在线视频 | 久久人人爽人人爽人人片av高清 | 色一情一乱一伦一区二区三欧美 | 欧美性猛交内射兽交老熟妇 | 纯爱无遮挡h肉动漫在线播放 | 亚洲精品鲁一鲁一区二区三区 | 精品一区二区不卡无码av | 国产福利视频一区二区 | 亚洲成a人片在线观看日本 | 99在线 | 亚洲 | 四虎永久在线精品免费网址 | 亚洲娇小与黑人巨大交 | 97无码免费人妻超级碰碰夜夜 | 自拍偷自拍亚洲精品被多人伦好爽 | 又紧又大又爽精品一区二区 | 欧美精品无码一区二区三区 | 国内揄拍国内精品人妻 | 国产精品福利视频导航 | 欧美怡红院免费全部视频 | 荫蒂被男人添的好舒服爽免费视频 | 国产精品久久久久9999小说 | 国产成人精品视频ⅴa片软件竹菊 | 国产乱人伦av在线无码 | 日本一区二区三区免费播放 | 牛和人交xxxx欧美 | 精品久久综合1区2区3区激情 | 成人女人看片免费视频放人 | 野外少妇愉情中文字幕 | 疯狂三人交性欧美 | 综合激情五月综合激情五月激情1 | 亚洲色www成人永久网址 | 日本一区二区三区免费播放 | 又大又紧又粉嫩18p少妇 | 国产97色在线 | 免 | 在线精品国产一区二区三区 | 午夜性刺激在线视频免费 | 无码免费一区二区三区 | 99精品国产综合久久久久五月天 | 亚洲狠狠婷婷综合久久 | 国产一区二区不卡老阿姨 | 粗大的内捧猛烈进出视频 | 一本色道久久综合狠狠躁 | 一本精品99久久精品77 | 色婷婷久久一区二区三区麻豆 | 久久精品女人的天堂av | 日韩欧美成人免费观看 | 亚洲色在线无码国产精品不卡 | 无码乱肉视频免费大全合集 | 久久国产劲爆∧v内射 | 东京热一精品无码av | 亚洲精品一区三区三区在线观看 | 亚洲欧洲中文日韩av乱码 | 亚洲精品一区二区三区在线 | 中文久久乱码一区二区 | 老熟妇仑乱视频一区二区 | 亚洲欧洲日本无在线码 | 久久久成人毛片无码 | 综合人妻久久一区二区精品 | 激情内射亚州一区二区三区爱妻 | 国产福利视频一区二区 | 久久久久se色偷偷亚洲精品av | 久久久久久久女国产乱让韩 | 欧美午夜特黄aaaaaa片 | 精品无码一区二区三区的天堂 | 国产午夜福利亚洲第一 | 久久综合狠狠综合久久综合88 | 亚洲爆乳精品无码一区二区三区 | 人妻插b视频一区二区三区 | 精品熟女少妇av免费观看 | 俺去俺来也www色官网 | 18禁黄网站男男禁片免费观看 | 欧美乱妇无乱码大黄a片 | 久久精品99久久香蕉国产色戒 | 18无码粉嫩小泬无套在线观看 | 欧美丰满老熟妇xxxxx性 | 亚洲熟妇自偷自拍另类 | 中国女人内谢69xxxxxa片 | 2020久久超碰国产精品最新 | 亚洲男人av天堂午夜在 | 樱花草在线社区www | 国产成人精品优优av | 国产色在线 | 国产 | 三上悠亚人妻中文字幕在线 | 熟妇激情内射com | 国产精品鲁鲁鲁 | 国内丰满熟女出轨videos | 东京无码熟妇人妻av在线网址 | 久久精品中文字幕大胸 | 久久久久久久人妻无码中文字幕爆 | 成人免费视频一区二区 | 东京无码熟妇人妻av在线网址 | а天堂中文在线官网 | 大乳丰满人妻中文字幕日本 | 色综合久久88色综合天天 | 国产成人人人97超碰超爽8 | 一本久道久久综合狠狠爱 | 性色欲网站人妻丰满中文久久不卡 | 
欧美亚洲国产一区二区三区 | 玩弄人妻少妇500系列视频 | 国产精品久久久久久久9999 | 青春草在线视频免费观看 | 欧美人与动性行为视频 | 国产成人久久精品流白浆 | 国产婷婷色一区二区三区在线 | 国产明星裸体无码xxxx视频 | 人妻插b视频一区二区三区 | 内射巨臀欧美在线视频 | 国色天香社区在线视频 | 亚洲国产精品一区二区第一页 | 午夜丰满少妇性开放视频 | 日本一卡二卡不卡视频查询 | 精品一区二区不卡无码av | 亚洲日韩av一区二区三区四区 | 国产精品办公室沙发 | 18禁止看的免费污网站 | 日本乱人伦片中文三区 | 国产成人久久精品流白浆 | 大肉大捧一进一出视频出来呀 | 夜夜影院未满十八勿进 | 国产精品a成v人在线播放 | 亚洲а∨天堂久久精品2021 | 丰满岳乱妇在线观看中字无码 | 国产明星裸体无码xxxx视频 | 伊人久久大香线蕉亚洲 | 日韩人妻无码中文字幕视频 | 免费乱码人妻系列无码专区 | 正在播放东北夫妻内射 | 亚洲熟女一区二区三区 | 精品成在人线av无码免费看 | 成人一区二区免费视频 | 成人试看120秒体验区 | 女人被男人爽到呻吟的视频 | 激情五月综合色婷婷一区二区 | 欧美人与物videos另类 | 国产亚洲精品精品国产亚洲综合 | 久久亚洲精品中文字幕无男同 | 黄网在线观看免费网站 | 国产av一区二区三区最新精品 | 四虎永久在线精品免费网址 | 成人一区二区免费视频 | 四十如虎的丰满熟妇啪啪 | 国产欧美精品一区二区三区 | 精品国产国产综合精品 | 无码国产色欲xxxxx视频 | 国产特级毛片aaaaaa高潮流水 | 国产又爽又黄又刺激的视频 | 黑人巨大精品欧美一区二区 | 高中生自慰www网站 | 久久亚洲精品中文字幕无男同 | 亚洲成av人影院在线观看 | 精品一区二区不卡无码av | 天天摸天天碰天天添 | 欧美一区二区三区视频在线观看 | 国语自产偷拍精品视频偷 | 55夜色66夜色国产精品视频 | 国产农村乱对白刺激视频 | 少妇愉情理伦片bd | 国产精品99爱免费视频 | 国产高潮视频在线观看 | 国产99久久精品一区二区 | 欧美性猛交xxxx富婆 | aⅴ在线视频男人的天堂 | 在线观看免费人成视频 | 97资源共享在线视频 | 亚洲国产成人a精品不卡在线 | 国产激情无码一区二区app | 澳门永久av免费网站 | 性欧美熟妇videofreesex | 性色欲情网站iwww九文堂 | 正在播放东北夫妻内射 | 亚洲一区二区三区香蕉 | 欧美喷潮久久久xxxxx | 国产乱人伦app精品久久 国产在线无码精品电影网 国产国产精品人在线视 | 日本肉体xxxx裸交 | 99视频精品全部免费免费观看 | 久久久久久a亚洲欧洲av冫 | 少妇无套内谢久久久久 | 熟妇人妻无码xxx视频 | 麻花豆传媒剧国产免费mv在线 | 国产成人无码av片在线观看不卡 | 国产精品高潮呻吟av久久4虎 | 天天躁日日躁狠狠躁免费麻豆 | 国产成人精品久久亚洲高清不卡 | 377p欧洲日本亚洲大胆 | 天天拍夜夜添久久精品 | 美女张开腿让人桶 | 国产精品亚洲а∨无码播放麻豆 | 国产精品视频免费播放 | 国精品人妻无码一区二区三区蜜柚 | 亚洲欧美日韩成人高清在线一区 | 日韩av无码一区二区三区 | 免费网站看v片在线18禁无码 | 婷婷六月久久综合丁香 | 丰满护士巨好爽好大乳 | 人人妻在人人 | 2020最新国产自产精品 | 无遮挡啪啪摇乳动态图 | 国产精品爱久久久久久久 | 国产精品内射视频免费 | 久久精品女人的天堂av | 美女黄网站人色视频免费国产 | 亚洲人成网站在线播放942 | 欧美熟妇另类久久久久久多毛 | 一区二区三区乱码在线 | 欧洲 | 无码精品人妻一区二区三区av | 日韩av无码一区二区三区不卡 | 国产色在线 | 国产 | 97精品人妻一区二区三区香蕉 | 久久 国产 尿 小便 嘘嘘 | 在线天堂新版最新版在线8 | 国产明星裸体无码xxxx视频 | 日本护士毛茸茸高潮 | 精品一区二区三区无码免费视频 | 国产精品久久久久影院嫩草 | 波多野结衣av一区二区全免费观看 | 成人无码精品一区二区三区 | 婷婷六月久久综合丁香 | 国产婷婷色一区二区三区在线 | 特级做a爰片毛片免费69 | 日本精品人妻无码77777 天堂一区人妻无码 | 乌克兰少妇性做爰 | 免费无码午夜福利片69 | 亚洲国产精品美女久久久久 | 欧美人与禽猛交狂配 | 欧美人与物videos另类 | 精品一区二区三区无码免费视频 | 巨爆乳无码视频在线观看 | 亚洲乱码中文字幕在线 | 成人无码视频免费播放 | 
精品久久综合1区2区3区激情 | 国内综合精品午夜久久资源 | 欧美国产亚洲日韩在线二区 | 久久久中文字幕日本无吗 | 国产麻豆精品一区二区三区v视界 | 无码帝国www无码专区色综合 | 无码国产色欲xxxxx视频 | 亚洲色欲色欲天天天www | 成人无码视频免费播放 | 日本饥渴人妻欲求不满 | 少妇人妻av毛片在线看 | 亚洲第一无码av无码专区 | 精品无人区无码乱码毛片国产 | 人人澡人人妻人人爽人人蜜桃 | 国产精品va在线观看无码 | 久久久久成人片免费观看蜜芽 | 少妇被黑人到高潮喷出白浆 | 久久国产精品偷任你爽任你 | 国产亚洲欧美日韩亚洲中文色 | 亚洲人成网站免费播放 | 乌克兰少妇性做爰 | 亚洲国产精品一区二区美利坚 | 婷婷综合久久中文字幕蜜桃三电影 | 丝袜足控一区二区三区 | 日韩精品成人一区二区三区 | 精品国产一区av天美传媒 | 中文字幕人妻无码一区二区三区 | 欧美日韩一区二区综合 | 中文精品无码中文字幕无码专区 | 人妻互换免费中文字幕 | 国产人妻大战黑人第1集 | 亚洲日韩av一区二区三区中文 | 色偷偷av老熟女 久久精品人妻少妇一区二区三区 | 亚洲国产精品无码久久久久高潮 | 亚洲日韩av片在线观看 | a片在线免费观看 | 亚拍精品一区二区三区探花 | 亚洲の无码国产の无码步美 | 丝袜 中出 制服 人妻 美腿 | 一本色道久久综合亚洲精品不卡 | 国产av一区二区三区最新精品 | 东京热一精品无码av | 精品国产一区二区三区av 性色 | 性欧美牲交xxxxx视频 | 麻豆果冻传媒2021精品传媒一区下载 | 人妻无码αv中文字幕久久琪琪布 | 成人精品视频一区二区三区尤物 | 久久亚洲中文字幕精品一区 | 中文字幕人妻无码一区二区三区 | 性欧美牲交xxxxx视频 | 99re在线播放 | 国内少妇偷人精品视频免费 | 九九热爱视频精品 | 亚洲无人区午夜福利码高清完整版 | 日本xxxx色视频在线观看免费 | 国产舌乚八伦偷品w中 | 亚洲国产精品久久人人爱 | 国产午夜福利亚洲第一 | 日本大香伊一区二区三区 | 久久久久久久久蜜桃 | 亚洲欧美日韩成人高清在线一区 | 亚欧洲精品在线视频免费观看 | 丰满少妇人妻久久久久久 | 色窝窝无码一区二区三区色欲 | 国产成人精品必看 | 在线а√天堂中文官网 | 99精品无人区乱码1区2区3区 | 国产绳艺sm调教室论坛 | 老子影院午夜伦不卡 | 国产美女极度色诱视频www | 国产亚洲精品久久久久久国模美 | 国内综合精品午夜久久资源 | 欧美xxxx黑人又粗又长 | 成人免费无码大片a毛片 | 中文字幕乱码亚洲无线三区 | 精品国产成人一区二区三区 | 99国产欧美久久久精品 | 一本色道久久综合狠狠躁 | 久久精品国产99精品亚洲 | 无套内射视频囯产 | 婷婷丁香五月天综合东京热 | 国产精品沙发午睡系列 | 国产偷自视频区视频 | 亚洲一区二区三区香蕉 | 国产片av国语在线观看 | 中文久久乱码一区二区 | 未满小14洗澡无码视频网站 | 任你躁国产自任一区二区三区 | 熟女少妇在线视频播放 | 欧美变态另类xxxx | 久久久久免费精品国产 | 性做久久久久久久免费看 | 久久人人97超碰a片精品 | 久久综合给久久狠狠97色 | 人妻与老人中文字幕 | 久久99精品国产.久久久久 | 亚洲综合精品香蕉久久网 | 亚洲一区二区三区在线观看网站 | 国产精品多人p群无码 | 国产凸凹视频一区二区 | 学生妹亚洲一区二区 | 亚洲伊人久久精品影院 | 国产精品va在线播放 | 无码人妻丰满熟妇区毛片18 | 在线观看欧美一区二区三区 | 国产三级精品三级男人的天堂 | 亚洲中文字幕无码中字 | 久久久久成人精品免费播放动漫 | 国产无套粉嫩白浆在线 | 欧洲欧美人成视频在线 | 国产av一区二区三区最新精品 | 正在播放老肥熟妇露脸 | 亚洲va中文字幕无码久久不卡 | 少妇无码一区二区二三区 | 熟女体下毛毛黑森林 | 亚洲日本va中文字幕 | 亚洲欧美色中文字幕在线 | 日本精品少妇一区二区三区 | 狠狠色噜噜狠狠狠7777奇米 | 久久亚洲日韩精品一区二区三区 | 精品国产国产综合精品 | 国产成人一区二区三区别 | 精品久久综合1区2区3区激情 | 欧美人与善在线com | 国产成人无码av一区二区 | aⅴ亚洲 日韩 色 图网站 播放 | 久久久久成人精品免费播放动漫 | 亚洲啪av永久无码精品放毛片 | 无码成人精品区在线观看 | 国产精品美女久久久 | 国产精品人人妻人人爽 | 亚洲国产成人a精品不卡在线 | 最新国产麻豆aⅴ精品无码 | 精品国产精品久久一区免费式 | 
国产成人无码一二三区视频 | 99久久久无码国产aaa精品 | 亚洲 a v无 码免 费 成 人 a v | 亚洲啪av永久无码精品放毛片 | 图片区 小说区 区 亚洲五月 | а天堂中文在线官网 | 亚洲娇小与黑人巨大交 | 天堂一区人妻无码 | 久久人妻内射无码一区三区 | 国产亚洲日韩欧美另类第八页 | а√天堂www在线天堂小说 | 午夜性刺激在线视频免费 | 亚洲日韩av片在线观看 | 亚洲国产精品一区二区第一页 | 亚洲乱亚洲乱妇50p | 国产精华av午夜在线观看 | 久久精品一区二区三区四区 | www国产亚洲精品久久网站 | 一本无码人妻在中文字幕免费 | 欧美日韩一区二区三区自拍 | 色一情一乱一伦一区二区三欧美 | 狠狠色丁香久久婷婷综合五月 | 国产成人无码a区在线观看视频app | 2019nv天堂香蕉在线观看 | 精品少妇爆乳无码av无码专区 | 99久久精品日本一区二区免费 | 国产av无码专区亚洲a∨毛片 | 天天拍夜夜添久久精品 | 久久国产精品偷任你爽任你 | 国产精品国产自线拍免费软件 | 青草视频在线播放 | 日日碰狠狠躁久久躁蜜桃 | 日日摸天天摸爽爽狠狠97 | 欧美xxxx黑人又粗又长 | 成 人 网 站国产免费观看 | 成在人线av无码免观看麻豆 | 日日碰狠狠丁香久燥 | www国产精品内射老师 | 亚洲娇小与黑人巨大交 | aa片在线观看视频在线播放 | 久久久久se色偷偷亚洲精品av | 国产成人无码av片在线观看不卡 | 欧美黑人性暴力猛交喷水 | 色婷婷香蕉在线一区二区 | 日本高清一区免费中文视频 | 欧美黑人乱大交 | 人人爽人人澡人人人妻 | 鲁鲁鲁爽爽爽在线视频观看 | 国产疯狂伦交大片 | 国产精品va在线观看无码 | 精品国偷自产在线 | 亚洲欧美国产精品专区久久 | 成 人影片 免费观看 | 在线亚洲高清揄拍自拍一品区 | 呦交小u女精品视频 | 久久综合给合久久狠狠狠97色 | 亚洲午夜福利在线观看 | 无码国产乱人伦偷精品视频 | 久久久久亚洲精品中文字幕 | 国精产品一品二品国精品69xx | 久久精品99久久香蕉国产色戒 | 日本又色又爽又黄的a片18禁 | 国产片av国语在线观看 | 内射白嫩少妇超碰 | 久在线观看福利视频 | 中文无码成人免费视频在线观看 | 国产精品无码永久免费888 | 亚洲一区二区三区播放 | 欧美人与动性行为视频 | 久久无码中文字幕免费影院蜜桃 | 成熟妇人a片免费看网站 | 精品aⅴ一区二区三区 | 国产成人精品三级麻豆 | 麻豆av传媒蜜桃天美传媒 | 内射老妇bbwx0c0ck | 97色伦图片97综合影院 | 久久精品视频在线看15 | 人妻插b视频一区二区三区 | 内射欧美老妇wbb | 亚洲va欧美va天堂v国产综合 | 亚洲人亚洲人成电影网站色 | 好男人社区资源 | 蜜桃视频韩日免费播放 | 精品无码国产一区二区三区av | 亚洲gv猛男gv无码男同 | 色综合久久网 | 99久久人妻精品免费一区 | 最新版天堂资源中文官网 | 国产成人精品三级麻豆 | 中文字幕乱妇无码av在线 | 欧美精品国产综合久久 | 一本一道久久综合久久 | 国产麻豆精品一区二区三区v视界 | 亚洲精品无码人妻无码 | 帮老师解开蕾丝奶罩吸乳网站 | 2019nv天堂香蕉在线观看 | 爱做久久久久久 | 久久亚洲日韩精品一区二区三区 | 亚洲乱码国产乱码精品精 | 国产亚洲视频中文字幕97精品 | 国产精品美女久久久久av爽李琼 | 久久天天躁狠狠躁夜夜免费观看 | 国产超级va在线观看视频 | 国产午夜福利亚洲第一 | 午夜无码区在线观看 | 国产区女主播在线观看 | 国产激情艳情在线看视频 | 亚洲精品中文字幕乱码 | 麻豆国产人妻欲求不满 | 国产女主播喷水视频在线观看 | 国内揄拍国内精品人妻 | 性色欲网站人妻丰满中文久久不卡 | 久久久久久亚洲精品a片成人 | 久久成人a毛片免费观看网站 | 一本久道高清无码视频 | 久久99热只有频精品8 | 国产农村妇女aaaaa视频 撕开奶罩揉吮奶头视频 | 色综合视频一区二区三区 | 99精品国产综合久久久久五月天 | 一本久久a久久精品vr综合 | 国产片av国语在线观看 | 国产成人综合在线女婷五月99播放 | 久久久久久九九精品久 | 丰满人妻翻云覆雨呻吟视频 | 久久精品国产一区二区三区 | 日日天干夜夜狠狠爱 | 久久婷婷五月综合色国产香蕉 | 亚洲色大成网站www | 亚洲成色在线综合网站 | 麻豆国产人妻欲求不满 | 午夜肉伦伦影院 | 亚洲国产av精品一区二区蜜芽 | 国内精品人妻无码久久久影院 | 国产人妻久久精品二区三区老狼 | 清纯唯美经典一区二区 
| www国产亚洲精品久久网站 | 国产激情综合五月久久 | 亚洲色欲色欲天天天www | 国产做国产爱免费视频 | 中文字幕 亚洲精品 第1页 | 亚洲欧洲无卡二区视頻 | 欧美猛少妇色xxxxx | 一本久道久久综合婷婷五月 | 一本色道久久综合狠狠躁 | 日本www一道久久久免费榴莲 | 麻豆精产国品 | 久久久久se色偷偷亚洲精品av | 亚洲日韩一区二区 | 亚洲日本一区二区三区在线 | 欧美自拍另类欧美综合图片区 | 亚洲欧美精品aaaaaa片 | 国产精品久久精品三级 | 日本乱偷人妻中文字幕 | 中文字幕av伊人av无码av | 中文字幕亚洲情99在线 | 久久精品国产精品国产精品污 | 国产性猛交╳xxx乱大交 国产精品久久久久久无码 欧洲欧美人成视频在线 | 又粗又大又硬毛片免费看 | 又大又硬又爽免费视频 | 亚洲精品一区三区三区在线观看 | 在线观看国产一区二区三区 | 中文字幕无码热在线视频 | 久久天天躁狠狠躁夜夜免费观看 | 国产两女互慰高潮视频在线观看 | 国产做国产爱免费视频 | 国产亚洲精品久久久ai换 | 国内少妇偷人精品视频 | 波多野结衣av一区二区全免费观看 | 爆乳一区二区三区无码 | 波多野结衣aⅴ在线 | 亚洲狠狠色丁香婷婷综合 | 无码帝国www无码专区色综合 | 露脸叫床粗话东北少妇 | 中文字幕中文有码在线 | 久久久久久久久蜜桃 | 亚洲另类伦春色综合小说 | 国产免费久久精品国产传媒 | 精品久久久无码中文字幕 | 欧美老妇交乱视频在线观看 | aⅴ在线视频男人的天堂 | 亚欧洲精品在线视频免费观看 | 又黄又爽又色的视频 | 国产午夜手机精彩视频 | 国产精品亚洲lv粉色 | 欧美成人高清在线播放 | 久久精品99久久香蕉国产色戒 | 中文字幕乱码人妻二区三区 | 中文无码成人免费视频在线观看 | 亚洲中文字幕无码一久久区 | 无码人妻精品一区二区三区下载 | 欧美熟妇另类久久久久久多毛 | 国产精品人妻一区二区三区四 | 香蕉久久久久久av成人 | 亚洲成av人片天堂网无码】 | 免费国产成人高清在线观看网站 | 亚洲综合伊人久久大杳蕉 | 亚洲男人av天堂午夜在 | 日本精品久久久久中文字幕 | 在线观看欧美一区二区三区 | 国产精品igao视频网 | 精品国产福利一区二区 | 美女黄网站人色视频免费国产 | 少妇厨房愉情理9仑片视频 | 久久国产精品精品国产色婷婷 | 亚洲小说春色综合另类 | 国产乱人伦偷精品视频 | 99久久无码一区人妻 | ass日本丰满熟妇pics | 久久99精品国产.久久久久 | 好爽又高潮了毛片免费下载 | 九一九色国产 | 欧美性猛交xxxx富婆 | 东京无码熟妇人妻av在线网址 | 日本乱偷人妻中文字幕 | 一个人看的www免费视频在线观看 | 亚洲一区二区三区无码久久 | 国产艳妇av在线观看果冻传媒 | 国产乱人伦av在线无码 | 久久久中文字幕日本无吗 | 午夜无码人妻av大片色欲 | 欧美freesex黑人又粗又大 | 波多野结衣av在线观看 | 日本一卡二卡不卡视频查询 | 亚洲人亚洲人成电影网站色 | 亚洲欧美国产精品专区久久 | 在线欧美精品一区二区三区 | 精品偷拍一区二区三区在线看 | 国产精品丝袜黑色高跟鞋 | 亚洲国产精品无码久久久久高潮 | 性色欲情网站iwww九文堂 | 久久99精品国产麻豆 | 久久精品国产日本波多野结衣 | 国产无遮挡又黄又爽免费视频 | 久久久成人毛片无码 | 精品久久久久久人妻无码中文字幕 | 少妇性l交大片欧洲热妇乱xxx | 色窝窝无码一区二区三区色欲 | 国产xxx69麻豆国语对白 | 久久伊人色av天堂九九小黄鸭 | 日韩在线不卡免费视频一区 | 国产精品爱久久久久久久 | 精品一区二区不卡无码av | 久9re热视频这里只有精品 | 国产亚洲精品久久久久久 | 欧美 日韩 亚洲 在线 | 丰满人妻精品国产99aⅴ | 国产精品亚洲一区二区三区喷水 | 欧美日本精品一区二区三区 | 熟妇女人妻丰满少妇中文字幕 | 欧美老熟妇乱xxxxx | 蜜桃臀无码内射一区二区三区 | 性色av无码免费一区二区三区 | 人人爽人人爽人人片av亚洲 | 欧洲熟妇精品视频 | 国产内射爽爽大片视频社区在线 | 亚洲人成人无码网www国产 | 中文字幕无线码免费人妻 | 日本免费一区二区三区最新 | 欧美激情内射喷水高潮 | 曰本女人与公拘交酡免费视频 | 丰满护士巨好爽好大乳 | 亚洲欧美日韩国产精品一区二区 | 国产av无码专区亚洲a∨毛片 | 亚洲 欧美 激情 小说 另类 | 蜜桃无码一区二区三区 | 超碰97人人做人人爱少妇 | 我要看www免费看插插视频 | 
蜜桃视频插满18在线观看 | 色婷婷香蕉在线一区二区 | 国产综合在线观看 | 久久久久国色av免费观看性色 | 欧美怡红院免费全部视频 | 亚洲精品一区二区三区在线 | 荫蒂添的好舒服视频囗交 | 又黄又爽又色的视频 | 亚洲七七久久桃花影院 | 午夜精品一区二区三区在线观看 | 精品国产一区二区三区四区在线看 | 人人澡人人妻人人爽人人蜜桃 | 国内精品久久毛片一区二区 | 性色欲情网站iwww九文堂 | 久久久精品国产sm最大网站 | 欧美自拍另类欧美综合图片区 | 国产手机在线αⅴ片无码观看 | 一区二区三区乱码在线 | 欧洲 | 99久久人妻精品免费一区 | 亚洲中文字幕在线观看 | 久久久精品欧美一区二区免费 | 水蜜桃亚洲一二三四在线 | 人妻少妇精品视频专区 | 亚洲精品国产品国语在线观看 | 亚洲 高清 成人 动漫 | ass日本丰满熟妇pics | 国产午夜福利亚洲第一 | 亚洲精品成人av在线 | 日本欧美一区二区三区乱码 | 久精品国产欧美亚洲色aⅴ大片 | a国产一区二区免费入口 | 国产黑色丝袜在线播放 | 国内精品人妻无码久久久影院蜜桃 | 娇妻被黑人粗大高潮白浆 | 日日天干夜夜狠狠爱 | 色婷婷欧美在线播放内射 | 少妇人妻大乳在线视频 | 久热国产vs视频在线观看 | 亚洲熟妇色xxxxx欧美老妇 | 少女韩国电视剧在线观看完整 | 日日摸夜夜摸狠狠摸婷婷 | 成人免费视频一区二区 | 一本久道久久综合婷婷五月 | √天堂中文官网8在线 | 国産精品久久久久久久 | 欧美zoozzooz性欧美 | 久久久国产精品无码免费专区 | 暴力强奷在线播放无码 | 亚洲成av人片在线观看无码不卡 | 亚欧洲精品在线视频免费观看 | 乱人伦中文视频在线观看 | 精品人妻人人做人人爽 | 久久综合给久久狠狠97色 | 国产97人人超碰caoprom | 1000部啪啪未满十八勿入下载 | 无码播放一区二区三区 | 亚洲精品国产品国语在线观看 | 99久久精品无码一区二区毛片 | 国产成人人人97超碰超爽8 | 欧美亚洲国产一区二区三区 | 日韩 欧美 动漫 国产 制服 | 国产精华av午夜在线观看 | 人妻无码久久精品人妻 | 久久久久久久人妻无码中文字幕爆 | 亚洲а∨天堂久久精品2021 | 粉嫩少妇内射浓精videos | 日韩精品a片一区二区三区妖精 | 午夜理论片yy44880影院 | 中文字幕人妻无码一区二区三区 | 国产亚洲精品久久久久久国模美 | 亚洲乱码国产乱码精品精 | 麻豆精品国产精华精华液好用吗 | 精品亚洲成av人在线观看 | 97久久国产亚洲精品超碰热 | 九九综合va免费看 | 四虎永久在线精品免费网址 | 亚洲欧美国产精品久久 | 少妇高潮喷潮久久久影院 | 日韩精品无码一区二区中文字幕 | 欧美丰满少妇xxxx性 | 99精品国产综合久久久久五月天 | 无码人妻少妇伦在线电影 | 亚洲精品国产精品乱码不卡 | 在线天堂新版最新版在线8 | 丁香花在线影院观看在线播放 | 麻豆果冻传媒2021精品传媒一区下载 | 国产av无码专区亚洲awww | 精品国精品国产自在久国产87 | 精品成人av一区二区三区 | 成人三级无码视频在线观看 | 人人澡人人透人人爽 | 久久久久久久女国产乱让韩 | 国产特级毛片aaaaaa高潮流水 | 亚洲爆乳无码专区 | 久久久久av无码免费网 | 国产精品va在线观看无码 | 欧洲极品少妇 | 激情爆乳一区二区三区 | 无码毛片视频一区二区本码 | 人妻aⅴ无码一区二区三区 | 亚洲一区二区三区 | 亚洲成av人片在线观看无码不卡 | 亚洲人成网站色7799 | 国产精品无码久久av | 男人扒开女人内裤强吻桶进去 | 成人免费无码大片a毛片 | 捆绑白丝粉色jk震动捧喷白浆 | 人人妻人人澡人人爽人人精品 | 婷婷丁香六月激情综合啪 | 国产国产精品人在线视 | 草草网站影院白丝内射 | 国产av一区二区精品久久凹凸 | 亚洲色无码一区二区三区 | 国产人妻精品一区二区三区 | 少妇的肉体aa片免费 | 亚洲色欲色欲欲www在线 | 久久综合九色综合97网 | 国产va免费精品观看 | 成熟妇人a片免费看网站 | 精品欧美一区二区三区久久久 | 天天摸天天碰天天添 | 黑森林福利视频导航 | 久久国产自偷自偷免费一区调 | 夜夜躁日日躁狠狠久久av | 无码乱肉视频免费大全合集 | 沈阳熟女露脸对白视频 | 中文字幕人妻无码一区二区三区 | 国产综合久久久久鬼色 | 97资源共享在线视频 | 日本高清一区免费中文视频 | 在线观看免费人成视频 | 亚洲欧美综合区丁香五月小说 | 久久99国产综合精品 | 国产性生交xxxxx无码 | 
日本高清一区免费中文视频 | 国产麻豆精品一区二区三区v视界 | 日韩av无码一区二区三区不卡 | yw尤物av无码国产在线观看 | 熟女体下毛毛黑森林 | 一二三四社区在线中文视频 | 男女超爽视频免费播放 | 午夜时刻免费入口 | 亚洲理论电影在线观看 | 超碰97人人做人人爱少妇 | 国产9 9在线 | 中文 | 国产精品香蕉在线观看 | 人妻少妇精品无码专区二区 | 麻豆果冻传媒2021精品传媒一区下载 | 131美女爱做视频 | 国产精品美女久久久 | 性啪啪chinese东北女人 | 成人免费视频视频在线观看 免费 | 青春草在线视频免费观看 | 亚洲成a人片在线观看无码3d | 婷婷五月综合激情中文字幕 | 综合激情五月综合激情五月激情1 | 无码av中文字幕免费放 | 在线看片无码永久免费视频 | 成人毛片一区二区 | 国产成人久久精品流白浆 | 久久久精品国产sm最大网站 | 日日麻批免费40分钟无码 | 性欧美videos高清精品 | 黑森林福利视频导航 | 成人免费视频在线观看 | 午夜成人1000部免费视频 | 国内综合精品午夜久久资源 | 一本大道伊人av久久综合 | 久久无码专区国产精品s | 99久久婷婷国产综合精品青草免费 | 正在播放老肥熟妇露脸 | 午夜无码区在线观看 | 亚洲成a人片在线观看日本 | 色综合久久久无码网中文 | 波多野结衣高清一区二区三区 | 欧美国产亚洲日韩在线二区 | 日日摸日日碰夜夜爽av | 婷婷五月综合缴情在线视频 | 老司机亚洲精品影院无码 | 日韩无套无码精品 | 久久综合久久自在自线精品自 | 亚洲国产成人av在线观看 | 极品尤物被啪到呻吟喷水 | 中文字幕+乱码+中文字幕一区 | 未满成年国产在线观看 | 日本精品高清一区二区 | 精品水蜜桃久久久久久久 | 中文字幕+乱码+中文字幕一区 | 国产麻豆精品一区二区三区v视界 | 人人妻人人澡人人爽欧美一区 | 欧美35页视频在线观看 | 在线亚洲高清揄拍自拍一品区 | 天下第一社区视频www日本 | 日日天干夜夜狠狠爱 | 国产美女极度色诱视频www | 久久国产精品萌白酱免费 | 欧美刺激性大交 | 久久久久久亚洲精品a片成人 | 1000部夫妻午夜免费 | 波多野结衣一区二区三区av免费 | 欧美变态另类xxxx | 国产精品久久国产三级国 | 国产一精品一av一免费 | 荫蒂添的好舒服视频囗交 | 精品无人国产偷自产在线 | 欧美性猛交内射兽交老熟妇 | 国产三级精品三级男人的天堂 | 亚洲国产精品无码一区二区三区 | 亚洲国产高清在线观看视频 | 蜜臀av在线播放 久久综合激激的五月天 | 欧美激情综合亚洲一二区 | 国产午夜视频在线观看 | 又色又爽又黄的美女裸体网站 | 国产精品18久久久久久麻辣 | 亚洲成av人影院在线观看 | 亚洲欧美国产精品专区久久 | 中文字幕人妻无码一区二区三区 | 久久99精品久久久久久 | 亚洲毛片av日韩av无码 | 亚洲精品一区二区三区大桥未久 | 综合激情五月综合激情五月激情1 | 好爽又高潮了毛片免费下载 | 亚洲色大成网站www | 一本无码人妻在中文字幕免费 | 人妻互换免费中文字幕 | 日本xxxx色视频在线观看免费 | 国产精品丝袜黑色高跟鞋 | 亚洲成av人片在线观看无码不卡 | 成人综合网亚洲伊人 | 亚洲成av人综合在线观看 | 日产精品99久久久久久 | 精品久久8x国产免费观看 | 激情人妻另类人妻伦 | 一本大道久久东京热无码av | 亚洲日本va午夜在线电影 | 国内精品人妻无码久久久影院 | 国产一区二区三区日韩精品 | 欧美 亚洲 国产 另类 | 久久久久国色av免费观看性色 | 久久午夜无码鲁丝片 | 99er热精品视频 | 亚洲国产欧美在线成人 | 综合网日日天干夜夜久久 | 一区二区三区高清视频一 | 国产性猛交╳xxx乱大交 国产精品久久久久久无码 欧洲欧美人成视频在线 | 国内丰满熟女出轨videos | 国产性生交xxxxx无码 | 国产农村妇女aaaaa视频 撕开奶罩揉吮奶头视频 | 欧美xxxx黑人又粗又长 | 色婷婷综合中文久久一本 | 成人欧美一区二区三区 | 在线а√天堂中文官网 | 在线观看国产午夜福利片 | 无码午夜成人1000部免费视频 | 亚洲色在线无码国产精品不卡 | 国产 精品 自在自线 | 波多野结衣一区二区三区av免费 | 狠狠综合久久久久综合网 | 亚洲精品国偷拍自产在线观看蜜桃 | 88国产精品欧美一区二区三区 | 一本一道久久综合久久 | 国语自产偷拍精品视频偷 | 曰本女人与公拘交酡免费视频 | 桃花色综合影院 | 欧美丰满老熟妇xxxxx性 | 国产明星裸体无码xxxx视频 | 
aa片在线观看视频在线播放 | 日韩亚洲欧美精品综合 | 男人的天堂av网站 | 高清不卡一区二区三区 | 一区二区三区乱码在线 | 欧洲 | 无码人中文字幕 | 日本爽爽爽爽爽爽在线观看免 | 蜜臀av无码人妻精品 | 永久黄网站色视频免费直播 | 国产亚洲精品久久久久久 | 夜夜高潮次次欢爽av女 | 午夜精品一区二区三区在线观看 | 激情内射亚州一区二区三区爱妻 | 性色欲网站人妻丰满中文久久不卡 | 久久综合给久久狠狠97色 | 国产无遮挡吃胸膜奶免费看 | 精品国产一区二区三区av 性色 | 色综合久久久无码网中文 | 人妻少妇精品无码专区动漫 | 秋霞成人午夜鲁丝一区二区三区 | 自拍偷自拍亚洲精品被多人伦好爽 | 国产极品视觉盛宴 | 人妻少妇精品视频专区 | 在线亚洲高清揄拍自拍一品区 | 国产凸凹视频一区二区 | 久久精品国产日本波多野结衣 | 亚洲成av人影院在线观看 | 女人被男人躁得好爽免费视频 | 亚洲精品一区三区三区在线观看 | 中文精品久久久久人妻不卡 | 高清国产亚洲精品自在久久 | 国产人妻精品一区二区三区 | 精品一区二区不卡无码av | 2020久久香蕉国产线看观看 | 日本欧美一区二区三区乱码 | 久久国产精品_国产精品 | 成人av无码一区二区三区 | 亚洲欧洲无卡二区视頻 | 99国产精品白浆在线观看免费 | 欧美乱妇无乱码大黄a片 | 日本大乳高潮视频在线观看 | 亚洲成a人片在线观看日本 | 国产97色在线 | 免 | 强奷人妻日本中文字幕 | 国产成人无码一二三区视频 | 国产人妻大战黑人第1集 | 亚洲国产欧美日韩精品一区二区三区 | 99久久久无码国产精品免费 | 男人和女人高潮免费网站 | 欧美自拍另类欧美综合图片区 | 377p欧洲日本亚洲大胆 | 成人女人看片免费视频放人 | 日韩 欧美 动漫 国产 制服 | 成人欧美一区二区三区 | 亚洲成熟女人毛毛耸耸多 | 人妻天天爽夜夜爽一区二区 | 蜜臀av在线观看 在线欧美精品一区二区三区 | 乱人伦人妻中文字幕无码久久网 | 久久久久久久女国产乱让韩 | 久久aⅴ免费观看 | 亚洲精品无码人妻无码 | 波多野42部无码喷潮在线 | 夜精品a片一区二区三区无码白浆 | 亚洲一区二区三区在线观看网站 | 国产综合色产在线精品 | 狠狠色噜噜狠狠狠7777奇米 | 久久久久亚洲精品男人的天堂 | 国产97人人超碰caoprom | 久9re热视频这里只有精品 | 免费看男女做好爽好硬视频 | 少妇性荡欲午夜性开放视频剧场 | 永久免费精品精品永久-夜色 | 国内综合精品午夜久久资源 | 婷婷五月综合缴情在线视频 | 强辱丰满人妻hd中文字幕 | 久久久久成人精品免费播放动漫 | 精品国偷自产在线视频 | 亚洲色大成网站www | 人人妻人人澡人人爽欧美一区九九 | 亚洲精品一区三区三区在线观看 | 性生交片免费无码看人 | 久久人人97超碰a片精品 | 激情内射日本一区二区三区 | 国产人妻久久精品二区三区老狼 | 精品无码国产自产拍在线观看蜜 | 中文字幕av日韩精品一区二区 | 久久久久久久人妻无码中文字幕爆 | av人摸人人人澡人人超碰下载 | 国内精品久久久久久中文字幕 | 国产人妻人伦精品1国产丝袜 | 成人性做爰aaa片免费看不忠 | 无码午夜成人1000部免费视频 | 18无码粉嫩小泬无套在线观看 | 秋霞成人午夜鲁丝一区二区三区 | 香蕉久久久久久av成人 | 国产精品久久久午夜夜伦鲁鲁 | 亚洲欧洲日本综合aⅴ在线 | 国产精品内射视频免费 | 色婷婷欧美在线播放内射 | 在线观看欧美一区二区三区 | 婷婷色婷婷开心五月四房播播 | 亚洲a无码综合a国产av中文 | 色婷婷久久一区二区三区麻豆 | 国产成人精品三级麻豆 | 成人免费视频一区二区 | 永久黄网站色视频免费直播 | 欧美成人家庭影院 | 日韩成人一区二区三区在线观看 | 人妻互换免费中文字幕 | 美女毛片一区二区三区四区 | 亚洲精品www久久久 | 国产舌乚八伦偷品w中 | 久久久久久久人妻无码中文字幕爆 | 夜精品a片一区二区三区无码白浆 | 色欲久久久天天天综合网精品 | 人妻无码αv中文字幕久久琪琪布 | 任你躁国产自任一区二区三区 | www一区二区www免费 | 国产午夜亚洲精品不卡下载 | 久精品国产欧美亚洲色aⅴ大片 | 无码人妻黑人中文字幕 | 亚拍精品一区二区三区探花 | 国产精品久久久一区二区三区 | 亚洲熟熟妇xxxx | 我要看www免费看插插视频 | 无码人妻精品一区二区三区不卡 | 亚洲色www成人永久网址 | 成人精品视频一区二区三区尤物 | 欧美人与牲动交xxxx | 人妻体内射精一区二区三四 | 
曰韩无码二三区中文字幕 | 午夜无码人妻av大片色欲 | 久久久无码中文字幕久... | 一本久久a久久精品vr综合 | 亚洲中文字幕在线观看 | 国产欧美精品一区二区三区 | 久久午夜无码鲁丝片 | 十八禁真人啪啪免费网站 | 永久免费精品精品永久-夜色 | 成人av无码一区二区三区 | 人妻夜夜爽天天爽三区 | 97精品人妻一区二区三区香蕉 | 亚洲国产精品无码久久久久高潮 | 国产亚洲精品久久久久久久久动漫 | 亚洲熟女一区二区三区 | 久久久久久九九精品久 | 亚洲成色在线综合网站 | 国产精品人人爽人人做我的可爱 | 久久天天躁夜夜躁狠狠 | 国产av无码专区亚洲awww | 无码人妻精品一区二区三区不卡 | 无人区乱码一区二区三区 | 亚洲中文字幕乱码av波多ji | 中文毛片无遮挡高清免费 | 沈阳熟女露脸对白视频 | 国产成人精品一区二区在线小狼 | 强辱丰满人妻hd中文字幕 | 日本丰满护士爆乳xxxx | 无码人妻丰满熟妇区五十路百度 | 国产精品高潮呻吟av久久 | 亚洲国精产品一二二线 | aⅴ在线视频男人的天堂 | 中文无码成人免费视频在线观看 | 动漫av一区二区在线观看 | 日产国产精品亚洲系列 | 无码人妻精品一区二区三区不卡 | 久久久久se色偷偷亚洲精品av | 中文字幕+乱码+中文字幕一区 | 欧美熟妇另类久久久久久多毛 | 色噜噜亚洲男人的天堂 | 亚洲精品久久久久avwww潮水 | 2019午夜福利不卡片在线 | 色一情一乱一伦 | 午夜男女很黄的视频 | 免费无码一区二区三区蜜桃大 | 狠狠色噜噜狠狠狠7777奇米 | 无遮挡国产高潮视频免费观看 | 久久久久久亚洲精品a片成人 | 久久久久久久人妻无码中文字幕爆 | 亚洲一区二区三区播放 | 最新版天堂资源中文官网 | 桃花色综合影院 | 国产三级精品三级男人的天堂 | 精品一区二区三区波多野结衣 | 亚洲经典千人经典日产 | 国产精品福利视频导航 | 领导边摸边吃奶边做爽在线观看 | 色婷婷久久一区二区三区麻豆 | 国产精品香蕉在线观看 | 青青青爽视频在线观看 | 亚洲人成网站色7799 | 精品国产成人一区二区三区 | 国产国语老龄妇女a片 | 欧美丰满熟妇xxxx性ppx人交 | 久久视频在线观看精品 | 欧美国产亚洲日韩在线二区 | 日本一区二区三区免费高清 | 中文字幕无码免费久久99 | 老熟妇乱子伦牲交视频 | 99riav国产精品视频 | 又大又紧又粉嫩18p少妇 | 嫩b人妻精品一区二区三区 | 色一情一乱一伦一区二区三欧美 |