Niantic announces “Large Geospatial Model” trained on Pokémon Go player data
reissbaker 6 hours
I'm confused by both this blog post and the reception on HN. They... didn't actually train the model. This is an announcement of a plan! They don't actually know if it'll even work. They announced that they "trained over 50 million neural networks," but not that they've trained this neural network: the other networks appear to just have been things they were doing anyway (i.e. the "Visual Positioning Systems"). They tout huge parameter counts ("over 150 trillion"), but that appears to be the sum of the parameters of the 50 million models they've previously trained, which implies each model had an average of... 3MM parameters. Not exactly groundbreaking scale. You could train one on a single consumer GPU.
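Back-of-the-envelope, assuming the 150 trillion figure is just the parameter counts summed across the ~50 million models they mention:

```python
# Rough arithmetic behind the "~3MM parameters per model" estimate above.
# Assumes "over 150 trillion" is simply the sum over all ~50 million models.
total_params = 150e12          # "over 150 trillion" parameters in total
num_models = 50e6              # "over 50 million" neural networks
avg_params = total_params / num_models
print(f"{avg_params:,.0f}")    # 3,000,000 -- about 3 million parameters per model
```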

This is a vision document, presumably intended to position Niantic as an AI company (and thus worthy of being showered with funding), instead of a mobile gaming company, mainly on the merit of the data they've collected rather than their prowess at training large models.


relyks 13 hours
This is pretty cool, but as a pokehunter (Pokemon Go player), I feel I have been tricked into contributing training data so that they can profit off my labor. How? They consistently incentivize you to scan pokestops (physical locations) through "research tasks" and give you some useful items as rewards. The effort is usually much more significant than what you get in return, so I have stopped doing it. It's not very convenient to take a video around the object or location in question. If they release the model and weights, though, I will feel I contributed to the greater good.

CaptainFever 13 hours
This title is editorialized. The real title is: "Building a Large Geospatial Model to Achieve Spatial Intelligence"

> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

My personal layman's opinion:

I'm mostly surprised that they were able to do this. When I played Pokémon GO a few years back, the AR was so slow that I rarely used it. Apparently it's so popular and common that it can be used to train an LGM?

I also feel like this is a win-win-win situation, economically. Players get a free(mium) game, Niantic gets a profit, and the rest of the world gets a cool new technology that is able to turn "AR glasses location markers" into reality. That's awesome.


KaiserPro 26 minutes
So what they are doing is not different from previous "VPS" systems; what's new is how they are doing it.

What is a "VPS" At its heart, Visual Positioning Systems are actually pretty simple. You build a 3d point cloud of a place, with each point being a repeatable unique feature that can be extracted from an image (see https://blog.ekbana.com/extracting-invariant-features-from-i...) Basically a "finger print"/landmark of a thing in real life that can be extracted from an image reliably.

To make that work, you need to generate a large map of these points: https://www.researchgate.net/figure/Sparse-point-cloud-Figur... This basically involves taking lots of pictures with GPS tags saying where they were taken. Google has the advantage of Street View; Niantic has its game. Others had to pay a bunch of people to go round a city with cameras.
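As an illustration of how matched features plus (roughly) known camera poses become 3D map points, here's a toy triangulation step with OpenCV; the intrinsics and poses below are invented placeholders, where a real pipeline would use full structure-from-motion with bundle adjustment:

```python
import numpy as np
import cv2

# Toy example: turn one feature matched in two GPS-tagged photos into one 3D map point.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # made-up camera intrinsics

# Camera 1 at the origin; camera 2 one unit to the right (pose from GPS, say).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Pixel coordinates of the SAME physical feature seen in both images (2xN arrays).
pts1 = np.array([[700.0], [400.0]])
pts2 = np.array([[650.0], [400.0]])

point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1
point_3d = (point_h[:3] / point_h[3]).ravel()         # one point in the cloud
print(point_3d)                                       # roughly [1.2, 0.8, 20.0]
```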

Once you build that point cloud (which isn't actually that easy: you can't do it all at once, and aligning point clouds is hard), you can then use trigonometry to work out where a picture was taken. This is called "re-localization", which is a stupid name. The hard part is the data management: there are billions of points in the world, and partitioning the database so that you can quickly localize a picture against it is what makes this difficult.
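The "trigonometry" step is usually a Perspective-n-Point solve: match 2D features in the query photo against 3D points in the map, then recover the camera pose. A minimal sketch with OpenCV, where the correspondence arrays are placeholders standing in for real descriptor matches:

```python
import numpy as np
import cv2

# Re-localization sketch: recover camera pose from 2D-3D matches (PnP + RANSAC).
# In a real system these correspondences come from matching the photo's descriptors
# against the point cloud; here they are random placeholders, as is K.
map_points_3d = np.random.rand(100, 3).astype(np.float32) * 10      # 3D points from the map
query_points_2d = np.random.rand(100, 2).astype(np.float32) * 1000  # matched pixels in the photo
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(map_points_3d, query_points_2d, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix
    camera_position = -R.T @ tvec    # where the photo was taken, in map coordinates
    print(camera_position.ravel())
```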

Hence this approach, which is basically "train a model to do it for us". You still get a "VPS", and you still need all that data, but they hope that a model will be able to optimize for speed.
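One way such a model could look, purely as a generic sketch and not Niantic's architecture, is an absolute pose regression network in the spirit of PoseNet: feed in an image, regress position and orientation directly, and let the network internalize the map:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Generic absolute-pose-regression sketch (PoseNet-style), not Niantic's model:
# an image goes in, a 3D position and a quaternion orientation come out.
class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()        # keep the 512-d feature vector
        self.backbone = backbone
        self.fc_xyz = nn.Linear(512, 3)    # translation
        self.fc_quat = nn.Linear(512, 4)   # orientation (quaternion)

    def forward(self, img):
        feat = self.backbone(img)
        xyz = self.fc_xyz(feat)
        quat = nn.functional.normalize(self.fc_quat(feat), dim=-1)
        return xyz, quat

model = PoseRegressor()
xyz, quat = model(torch.randn(1, 3, 224, 224))  # dummy image batch
print(xyz.shape, quat.shape)                    # torch.Size([1, 3]) torch.Size([1, 4])
```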

Is it private?

No, the original system isn't private. If they've done their job properly, then nothing identifiable will be in the "map", as that's extra data you don't need. What they do with the raw photos, and the metadata they contain, is another matter.


dankwizard 7 hours
We do this at MyFitnessPal.

When users scan their barcode, the preview window is zoomed in so users think it's mostly barcode. We actually capture quite a bit more background, typically of a fridge, supermarket aisle, pantry, etc., and it is sent across to us, stored, and trained on.

Within the next year we will have a pretty good idea of what the average pantry, fridge, or supermarket aisle looks like. Who knows what is next.