#large-datasets

from LogRocket Blog
1 day ago

How to speed up long lists with TanStack Virtual - LogRocket Blog

When you are building a social feed, data grid, or chat UI, everything feels fine with 10 mock items. Then you connect a real API, render 50,000 rows with myList.map(...), and the browser locks up. The core problem is simple: you are asking the DOM to do too much work. Virtualization solves this by rendering only what the user can actually see. Instead of mounting 50,000 nodes, you render the 15-20 items visible in the viewport, plus a small buffer above and below so fast scrolling does not reveal blank space.
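The windowing math behind this can be sketched without the library. The helper below is a hypothetical illustration, not the TanStack Virtual API (which additionally handles dynamic measurement, scroll listening, and smooth scrolling): given the scroll offset, viewport height, and a fixed item height, it computes which slice of rows to mount and where to position it.

```typescript
interface VisibleRange {
  start: number;     // index of the first row to mount
  end: number;       // index of the last row to mount (inclusive)
  offsetTop: number; // pixel offset to translate the mounted slice by
}

// Hypothetical helper: fixed-height virtualization under the assumption
// that every row is exactly `itemHeight` pixels tall.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  itemCount: number,
  overscan = 3, // small buffer of extra rows above and below the viewport
): VisibleRange {
  const first = Math.floor(scrollTop / itemHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / itemHeight) - 1;
  const start = Math.max(0, first - overscan);
  const end = Math.min(itemCount - 1, last + overscan);
  return { start, end, offsetTop: start * itemHeight };
}

// With 50,000 rows of 40px in a 600px viewport, only ~21 rows are mounted
// at any scroll position; the rest of the list is empty space supplied by
// a single tall container (itemCount * itemHeight pixels high).
const r = visibleRange(10000, 600, 40, 50000);
console.log(r.end - r.start + 1, "rows mounted out of 50000");
```

In a real component, the container gets `height: itemCount * itemHeight` to keep the scrollbar honest, and the mounted slice is absolutely positioned at `offsetTop`; TanStack Virtual's `useVirtualizer` returns the same kind of information via `getVirtualItems()` and `getTotalSize()`.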
React
Data science
from Hackernoon
1 year ago

Breaking the iid Barrier: How Stochastic Gradients Are Going Spatial | HackerNoon

Stochastic gradient methods offer efficient solutions for large-scale datasets in Bayesian inference.