Building a Custom Infinite Scroller

With Node.js, Mongoose, and Express

Published by Nathan A. Wilcox on August 9, 2020

Time to read: 15 minutes. Level: Advanced

We are going to build a simple news feed that mimics a typical social media infinite scrolling feed. This is the effect you see when scrolling through a list of items: as you approach the end of the list, the application automatically requests more. You can see my sample project here.

To achieve this, we need to solve 3 problems:

  1. How can we design our server so it can send data to the client based on what was already delivered?
  2. How can the browser let the server know what data it needs, and what data it already has?
  3. How will the browser tell the server it needs more?

Problem 1: Handling requests for data, and more data

To solve the first problem, we are going to create an endpoint in our application that returns JSON to the browser. To make sure the browser only gets what it needs, the server will expect the browser to tell it three things: where to start reading database records, how many records to read, and how many records it has already received. This requirement takes the form of three parameters: a page load time, a request limit, and a skip value. The page load time gives us a starting point; our query will only look for posts created before the page was loaded. The request limit tells our query how many results the browser wants. And finally, the skip value tells our query how many records have already been sent to the browser, so it can skip past them. The following code handles these requirements.


app.get('/list', async function(req, res) {

  //params sent by the client
  let _pageLoad = new Date(req.query.pageLoad); //datetime the page was loaded
  let _skip = parseInt(req.query.skip);         //how many records were already delivered
  let _pageLimit = parseInt(req.query.limit);   //how many records to return

  //only posts created before the page loaded, newest first,
  //skipping what the browser already has
  let _posts = await Post.find({
      posted: { $lte: _pageLoad }
  })
  .sort({ posted: -1 })
  .skip(_skip)
  .limit(_pageLimit);

  res.json({
      posts: _posts
  });
});

We are using the pageLoad param in our where clause, the skip param to avoid sending the same data back to the browser, and the limit param to return only the number of posts the browser wants. As long as the browser is honest about its needs and a malicious user isn't forging requests, it will always get the right data and avoid requesting what it already has. With our endpoint ready, let's move on to the client side.
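For reference, here is a minimal sketch of the Post model this endpoint assumes. The exact schema isn't shown in this article, so the fields below are based only on the properties we use later (author, contents, posted).

//Minimal sketch of the assumed Post model; the real schema may differ.
const mongoose = require('mongoose');

const postSchema = new mongoose.Schema({
  author:   String,                            //display name of the poster
  contents: String,                            //body of the post
  posted:   { type: Date, default: Date.now }  //used for sorting and the pageLoad cutoff
});

module.exports = mongoose.model('Post', postSchema);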


Problem 2: The browser requesting the right data

Now we need a method on the client side that can fetch data based on the parameters we defined on the server. For this, we are going to write a method that sends a request to our endpoint along with some variables we prepare. It will accept a start date, a limit, and a skip value. When it finishes, it will append each result to a UL element on our page using some jQuery.


function getMorePosts(start, limit, skip) {

  //check if there are any requests in progress before requesting more
  if(!_requestInProgress) {
    _requestInProgress = true;

    //send request to server with callback to append results to our list.
    sendRequestToServer("/list", { 
      pageLoad: start, 
      limit: limit, 
      skip: skip }, 
      function(res) {  
        //update the list with each post and mark the inprogress flag to false.
        res.posts.forEach(post => appendPost(post.author, post.contents));
        _requestInProgress = false;
      }
    );
  }
} 

//uses jQuery to send GET request to our '/list' url
function sendRequestToServer(url, options, oncomplete) {
  $.ajax({
    dataType: "json",
    url: url,
    data: options,
    success: oncomplete
  });
}

//Accepts 2 string arguments, and appends an HTML snippet to our list.
function appendPost(author, contents) {
  $("#postList").append(`<li class="media">
    <div class="media-body">
      <h5 class="mt-0 mb-1 font-weight-bold">${author}</h5>
      ${contents} 
      <hr>
    </div>
  </li>`);
}

When we invoke getMorePosts(), we include our start datetime stamp, how many posts we want returned, and how many rows to skip. When we get our response, we loop through the JSON result, calling appendPost() for each entry. That method simply appends some HTML to the UL#postList element.
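For clarity, the JSON the client loops over has roughly this shape (the values are made up for illustration, and _id and other Mongoose-generated fields are omitted):

//Illustrative example of a /list response
let exampleResponse = {
  posts: [
    { author: "Jane Doe", contents: "First post!",  posted: "2020-08-09T13:45:00.000Z" },
    { author: "John Roe", contents: "Hello, world.", posted: "2020-08-09T13:30:00.000Z" }
  ]
};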

To put this code to work, we create the required parameters with default values when the page first loads, and then call this method for the first time inside the document ready event.


//create our parameters when the page first loads.
let _pageLoad = new Date();     //Datetime of page load.
let _limit = 10;                //Limit counter. Limits results from server
let _requests = 0;              //Request Counter. Counts requests sent
let _requestInProgress = false; //Boolean flag. Blocks additional requests

$(document).ready(function() { 
  getMorePosts(_pageLoad, _limit, _limit * _requests++);
});

When passing the skip parameter, we multiply the number of requests we have already made by the limit counter. This tells the server how many results to skip before sending more rows. For the first request this works out to 0, but the next request will be 1*10=10, then 2*10=20, and so on.

Now when the page loads, we get our initial list of posts. The only thing left is to load more when we scroll to the bottom of the page.


Problem 3: Having the browser request more data when you reach the end of the page

We are going to write some logic that creates a new request when we get within 100 pixels of the bottom of the page. Because the scroll event fires continuously, we also need a safeguard to prevent spamming the server with requests. As you saw in the previous logic, we have a boolean flag, _requestInProgress, that we set to true when a request starts and revert to false once it completes. This lets us make sure we are not still waiting on one request before we send another.


$(window).scroll(function() {
  //check that no request is in flight and that we are within 100 pixels of the bottom.
  //guarding on the flag here keeps _requests from advancing when nothing is actually sent.
  if(!_requestInProgress && $(window).scrollTop() + $(window).height() > $(document).height() - 100) {
    getMorePosts(_pageLoad, _limit, _limit * _requests++);
  }
});

When we scroll toward the bottom of the page, we automatically invoke the getMorePosts() method. As long as a request isn't already in progress, the browser asks the server for more posts and receives the next batch of 10. With all 3 problems solved, we now have a functioning infinite scrolling news feed. You can see this logic in action here.


Bonus Exercise

In the sample I have linked, you will see that it not only loads more posts when the page is scrolled, but also routinely polls the server. Think about how you could build a news feed that not only loads more posts on scroll, but also inserts new posts as they arrive on the server. How would we have to modify the code we already created? For a working example of this, go here.
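One possible direction is to poll on a timer for posts created after the newest one already on the page. The sketch below assumes a hypothetical GET /list/new endpoint that accepts a since timestamp, and a prependPost() helper that adds an item to the top of the list; neither exists in the code above, and the linked example may take a different approach.

//Rough sketch of client-side polling for new posts (assumes the hypothetical
//GET /list/new endpoint and prependPost() helper described above).
let _newestSeen = _pageLoad; //timestamp of the newest post we have displayed

setInterval(function() {
  sendRequestToServer("/list/new", { since: _newestSeen.toISOString() }, function(res) {
    res.posts.forEach(function(post) {
      prependPost(post.author, post.contents);
      //track the newest timestamp so the next poll only asks for newer posts
      if(new Date(post.posted) > _newestSeen) {
        _newestSeen = new Date(post.posted);
      }
    });
  });
}, 10000); //poll every 10 seconds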


Summary

To recap what we have done: we created a REST endpoint on our server that accepts a start datetime stamp, a result limit, and a skip parameter, and returns JSON representing the Post data that matches. We then take that data and populate our list on the client. When the client scrolls past a certain point, another request is generated and the process repeats. We now have a scrolling feed that will never run dry as long as there is content to serve.

