Limit to API download?

Hey All,

I am trying to download the POI data for all of the US using the API. I have set “maxresults” to 1e6, but I am only getting ~5k results scattered across the US. Is there a limit on how much data I can pull from the API, or is something else going on?

Here is my call:
url = ‘


No, there’s not an intentional limit there, so either something is timing out or there is a problem with the API server that’s responding. We’ve had some server performance issues over the last few days, with the servers struggling to cope with the volume of queries.

In my test using curl (key omitted from the example) I get a 93MB file:
curl “” --output c:\temp\ocm.json

I’ll investigate to make sure all the API servers are currently caching OK.

Interesting. If I use the curl command I also get the full download. I was using the “requests” library in Python previously. If there are any experts out there that could tell me why my approach using “requests” is not working I would really appreciate it!

import requests

maxresults = 1000000
api_key = '&{my-key}'
url = f'{maxresults}&compact=false&verbose=false'

# pull data from API
response = requests.get(url + api_key)
stations_json = response.json()
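For anyone hitting a similar wall with `requests`: a common cause is building the query string by hand, so a parameter ends up malformed or silently ignored. A sketch of a more robust version follows, letting `requests` encode the parameters and setting a generous timeout for a large pull. The endpoint URL, parameter names, and key handling are assumptions based on the Open Charge Map v3 API, not taken from the (omitted) URL in this thread; adjust to your actual call.

```python
import requests

# Assumed endpoint (the actual URL was omitted in the thread)
API_URL = "https://api.openchargemap.io/v3/poi"

def build_params(api_key, max_results=1_000_000):
    """Assemble query parameters; requests handles the URL encoding."""
    return {
        "key": api_key,          # hypothetical parameter name
        "countrycode": "US",
        "maxresults": max_results,
        "compact": "false",      # query-string booleans are lowercase
        "verbose": "false",
    }

def fetch_pois(api_key, max_results=1_000_000, timeout=300):
    # A ~93 MB response can take a while; set an explicit timeout
    # and fail loudly on HTTP errors instead of parsing a partial body.
    resp = requests.get(API_URL, params=build_params(api_key, max_results),
                        timeout=timeout)
    resp.raise_for_status()
    return resp.json()
```

Passing a `params` dict also makes it easy to see exactly what was sent: `resp.request.url` shows the final encoded URL if you need to compare it against the working curl command.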

Sorry, you can disregard this “issue”. After downloading the data I was filtering on the “NumberOfPoints” attribute, which removed all records where this value was missing. I hadn’t realized that it was missing for so many records.

Which brings me to another question: is the NumberOfPoints attribute the best attribute for knowing the number of chargers at a station? It seems to be missing for a lot of the chargers that appear in the AFDC dataset, even though the AFDC data does include a count.

Hi, yes, you will find various fields are not well populated. Our POIs are effectively summary information about groups of chargers at a location. If NumberOfPoints (which, unintuitively, has come to mean the number of charging bays available) is not populated, you can assume 1 (or possibly more).

Really, though, the quality of that information varies and is often implied by the connection info instead, but it’s not great. It stems from us originally only having a few, very differently shaped data sources to pull from, which influenced the schema; stricter revisions to the model for edits etc. are long overdue.
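The fallback described above (use NumberOfPoints when populated, otherwise infer from the connection info, otherwise assume 1) can be sketched as a small helper. The field names (`NumberOfPoints`, `Connections`, `Quantity`) follow the OCM POI schema, but treat this as an illustrative sketch rather than an official recipe:

```python
def charger_count(poi):
    """Best-effort charging-bay count for one POI record (dict).

    Order of preference, per the advice above:
    1. NumberOfPoints, when populated and non-zero
    2. sum of per-connection Quantity values, when present
    3. assume at least one bay
    """
    n = poi.get("NumberOfPoints")
    if n:
        return int(n)
    # Fall back to the connection info; Quantity may itself be missing
    total = sum(c.get("Quantity") or 0 for c in poi.get("Connections") or [])
    return total if total > 0 else 1
```

Applied across the full download, this avoids silently dropping the many records where NumberOfPoints is unset, which was the filtering problem earlier in the thread.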