Working with large datasets and the rate limit?


#1

Has anyone come up with a good way to work with larger datasets given the 5,000-point rate limit on the GraphQL API? I have a cache I can work against for data that has already been retrieved; the catch is that the initial retrieval of the data I want (open and merged PRs) bumps me up against the limit.

Currently I’m using a query like:

{
  repository(owner: "Me", name: "myrepo") {
    pullRequests(first: 100, states: [OPEN, MERGED], orderBy: {direction: ASC, field: CREATED_AT}) {
      edges {
        node {
          title
          author {
            login
          }
        }
        cursor
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
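For reference, here's roughly what I think the paginated form of the query above would look like, with the cursor passed in as a variable and a `rateLimit` field added so each response reports its actual point cost (sketched from the GitHub v4 docs; the `$cursor` variable name is just my choice):

```graphql
query PullRequests($cursor: String) {
  # Reports how many points this call cost and how many remain this hour.
  rateLimit {
    cost
    remaining
  }
  repository(owner: "Me", name: "myrepo") {
    pullRequests(first: 100, after: $cursor, states: [OPEN, MERGED], orderBy: {direction: ASC, field: CREATED_AT}) {
      nodes {
        title
        author {
          login
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
```

On the first call `$cursor` is null; on each subsequent call it's the `endCursor` from the previous page.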

Is there a more efficient way to do this?
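In case it helps frame the question, this is the shape of the pagination loop I have in mind: keep requesting pages until `hasNextPage` is false, feeding `endCursor` back in each time. The `fetch_page` helper is hypothetical (in real use it would POST the query with an `after:` argument to the GraphQL endpoint); here it's stubbed with canned pages just to illustrate the cursor logic:

```python
def collect_pull_requests(fetch_page):
    """Accumulate PR nodes across pages using cursor-based pagination.

    fetch_page(cursor) must return a dict shaped like the pullRequests
    connection: {"nodes": [...],
                 "pageInfo": {"endCursor": str | None, "hasNextPage": bool}}
    """
    nodes, cursor = [], None
    while True:
        page = fetch_page(cursor)
        nodes.extend(page["nodes"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        cursor = info["endCursor"]  # feed the cursor back in for the next page


# Canned two-page response standing in for the GraphQL endpoint.
_pages = {
    None: {"nodes": [{"title": "PR 1"}, {"title": "PR 2"}],
           "pageInfo": {"endCursor": "c2", "hasNextPage": True}},
    "c2": {"nodes": [{"title": "PR 3"}],
           "pageInfo": {"endCursor": "c3", "hasNextPage": False}},
}

all_prs = collect_pull_requests(lambda cursor: _pages[cursor])
print([pr["title"] for pr in all_prs])  # → ['PR 1', 'PR 2', 'PR 3']
```

The loop itself is cheap; my worry is that the initial crawl still has to pay for every page, which is why I'm asking whether there's a less expensive way to do the first full retrieval.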

TIA