GreenWay Polska as a new Charging Operator
"Title": "GreenWay Polska",
"ContactEmail": "[email protected]",
(I'm not sure about the ID though; 3451 is just the first free one at the time of checking.)
A large part of operator ID 88 (GreenWay in Slovakia) should actually be re-assigned to GreenWay Polska.
Is there any better way than doing it manually for all affected stations?
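The re-assignment itself is mechanical once the affected stations are selected. A minimal sketch of the field change, assuming POIs have been fetched as JSON dicts (e.g. from the OCM v3 `poi` endpoint filtered by operator); the actual submission step back to OCM is deliberately omitted:

```python
def reassign_operator(pois, old_operator_id, new_operator_id):
    """Return updated copies of every POI that belongs to old_operator_id."""
    updated = []
    for poi in pois:
        if poi.get("OperatorID") == old_operator_id:
            changed = dict(poi)  # shallow copy; leave the original intact
            changed["OperatorID"] = new_operator_id
            updated.append(changed)
    return updated

# Example: two GreenWay (SK) stations and one unrelated operator
pois = [
    {"ID": 101, "OperatorID": 88},
    {"ID": 102, "OperatorID": 88},
    {"ID": 103, "OperatorID": 5},
]
changed = reassign_operator(pois, 88, 3451)
```

Only the matching stations come back, with just the one field altered, which keeps the change easy to review.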
By the way, I’ve reached out to them and asked if they would agree to provide open data so it could be used by projects such as OCM.
That operator has been created. The ID is indeed 3451.
Yes, sometimes bulk updates could be useful, but how would you go about designing the interface to do it safely? For instance, I’ve recently been manually updating tariffs for a fairly major charging operator in the UK. Select all locations run by a given operator, in a given country… and only change one or two fields. Would it be restricted to country editors?
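One way to make such a bulk edit safe is to restrict which fields it may touch at all. This is a hypothetical design sketch, not how OCM works today: POIs are selected by operator and country, and only a whitelisted field (here `UsageCost`, i.e. the tariff) can be patched. The flat `CountryCode` field is a simplification of the real nested POI model:

```python
# Hypothetical whitelist: only the tariff field may be bulk-edited
ALLOWED_BULK_FIELDS = {"UsageCost"}

def bulk_patch(pois, operator_id, country_code, patch):
    """Apply `patch` to POIs matching operator+country, rejecting disallowed fields."""
    illegal = set(patch) - ALLOWED_BULK_FIELDS
    if illegal:
        raise ValueError(f"fields not allowed in bulk edits: {illegal}")
    result = []
    for poi in pois:
        if (poi.get("OperatorID") == operator_id
                and poi.get("CountryCode") == country_code):
            result.append({**poi, **patch})   # patch wins only for listed fields
        else:
            result.append(poi)                # everything else is untouched
    return result
```

Restricting the editable fields also answers part of the permissions question: the narrower the patch, the less damage a non-country-editor could do.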
I know too little about the current ways of doing things in OCM. I do have my own ideas on how to handle data updates, high availability, decentralization etc., but since I'm a newbie around here I don't want to elaborate on that yet, as it might sound like advertising my own weird agenda with buzzwords. I'd rather come back with at least a proof of concept, once I have enough bandwidth to turn ideas into code (and even before doing that, of course, I need to learn more about OCM internals / processes).
For now it doesn't seem easy to handle such cases without ultimately trusting editors even before they submit data. They can always introduce errors (by mistake or maliciously). Is there versioning for edits, i.e. a way to easily revert such errors?
It's an example of the same set of data, but handled in a more atomic way, so it can actually be reviewed just by looking at the differences between revisions. I've included a few change sets (a few days' worth of ocm-data contributions in each) to provide a better picture.
Currently, when edits are submitted by normal users, we capture the JSON version of the POI before and after the change. Editors can then approve the change or reject it; if nobody does anything, the change is auto-approved after a few days.
After a while we archive the edit queue history for a POI and may eventually discard old edits.
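Since both the before and after JSON snapshots are captured, a reviewer-friendly view only needs to compute which fields actually differ. A minimal sketch (top-level fields only; a real POI diff would need to recurse into nested objects like `AddressInfo` and `Connections`):

```python
def poi_diff(before, after):
    """Return {field: (old, new)} for top-level fields that differ."""
    keys = set(before) | set(after)
    return {k: (before.get(k), after.get(k))
            for k in keys
            if before.get(k) != after.get(k)}

before = {"ID": 101, "UsageCost": "free", "NumberOfPoints": 2}
after  = {"ID": 101, "UsageCost": "0.45 EUR/kWh", "NumberOfPoints": 2}
diff = poi_diff(before, after)  # only UsageCost differs
```

This is essentially the "review by looking at differences between revisions" idea applied to the existing edit queue, without changing how edits are stored.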
If we were going to implement a new process for differential edits I’d want us to use something that’s well established, so we don’t have to build a bunch of custom stuff. Maybe git, maybe dat (https://dat.foundation/) maybe some other open data versioning tool.
The main issue with bulk updates is that the submitter may or may not have bothered to de-duplicate their data or compare it to what we already have. There is also a data model transformation necessary in most cases, to translate connector types/field names etc. We have a slight gap in our data model when it comes to grouping EVSEs, which is more obvious when compared to OCPI, and it can be an issue or not depending on what we're importing.
The vast majority of data we see elsewhere in other open data sources (gov repositories etc.) has large segments of low-quality data (POIs in the wrong country, invalid latitude/longitude etc.), so ideally, if we know someone is trying to make a batch update, we would first feed it through some filters. Currently we do this with our own imports, but I don't plan to continue these as they are not really sustainable (they are moving targets).
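The kinds of filters mentioned above can be quite simple. A sketch of a first-pass sanity check, assuming flattened `Latitude`/`Longitude`/`CountryCode` fields for illustration (the real POI model nests coordinates under `AddressInfo`):

```python
def basic_filters(pois, expected_country):
    """Split POIs into (kept, rejected) by coordinate and country sanity checks."""
    kept, rejected = [], []
    for poi in pois:
        lat = poi.get("Latitude")
        lon = poi.get("Longitude")
        ok = (
            isinstance(lat, (int, float)) and -90 <= lat <= 90
            and isinstance(lon, (int, float)) and -180 <= lon <= 180
            and (lat, lon) != (0, 0)               # "null island" placeholder coords
            and poi.get("CountryCode") == expected_country
        )
        (kept if ok else rejected).append(poi)
    return kept, rejected
```

Rejected records would go back to the submitter rather than silently disappearing, so the batch can be corrected at the source.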
Operators etc. could use our normal API to submit one change at a time, but they would inevitably try to replace all the fields rather than doing a differential update of just one or two fields, overwriting anything real users may have contributed (e.g. corrections). Currently, if a user edits something that previously came from an import, we take control of that POI from then on and don't attempt to import anything over the top of it (it's been this way since about 2013).
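The contrast between a full replace and a differential update is easy to show. In this sketch (field names illustrative) the operator submits only the fields it intends to change, so a user-contributed correction survives, whereas a whole-record replace would wipe it:

```python
def differential_update(current, submitted_fields):
    """Apply only the explicitly submitted fields over the current record."""
    return {**current, **submitted_fields}

current = {
    "ID": 101,
    "UsageCost": "0.45 EUR/kWh",
    "AccessComments": "Gate code 1234",   # contributed by a real user
}
submitted = {"UsageCost": "0.50 EUR/kWh"}  # operator changes the tariff only

merged = differential_update(current, submitted)
# AccessComments is preserved; a full replace with the operator's own
# record (which lacks AccessComments) would have deleted it.
```

Requiring submitters to send patches rather than full records is what makes the "who is the primary source" question tractable: each field can in principle carry its own provenance.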
It’s a fairly classic data management problem of which data source is the primary and which is just a copy. Ultimately we would rather be the primary, or lock records so that they can only be updated by the source (which was considered in the past but means you have to keep mistakes and can’t fix them directly).
I’m very open to suggestions and even the development of new tools/systems, but if it requires me to do anything then it’s probably not going to happen unfortunately.
Obviously, creating an MVP is easy; further development and maintenance is usually the pain.
My idea is to create a Hive dApp (I believe "dat" might have similar ideas behind their project). OCM is, IMHO, a perfect use case for Hive, and Hive is a perfect platform for projects like OCM, especially with upcoming features such as modular Hivemind.
No worries, I know the pain. No bandwidth to do fancy stuff. In the worst case it could be just an alternative way of getting OCM data, but I hope that at some point it might become the ultimate storage for such data and solve a lot of issues, such as scalability. It could make it possible to use reputation-based views: for example, I trust @Simon_Hewison more with his manual contributions than some generic gov open data repository, but at the same time I prefer to use such a repository over @Anon420's contributions. Things might be different for different countries / regions, and for the preferences of different users / apps.
Are you familiar with OCN (Open Charging Network)? They have a blockchain/Ethereum-based OCPI distribution network. We did join it and set up a node, but it was complex and nobody was really using it.
Yes there may be a way to achieve open data distribution and maintenance of data this way. You will know infinitely more about that than me however. Ultimately we only care about the data set being constantly maintained and available, not really how it works.
Everyone has their own bias towards how data and apps should be architected, mine is pretty conventional databases and APIs but as I’ve said, I’m open to different approaches especially if they are self-reliant and don’t hinge on the contributions/skills of one person.
I'm familiar with solutions similar to OCN and other attempts that were made based on Ethereum etc., which was actually what got me thinking that, if there's such a need, Hive would be an optimal platform (way faster, fee-less, designed for this kind of dapp). And yes, I'm aware that I'm heavily biased.
And I'd love for you to see that you can get all that on Hive without extra overhead or complexity. We are not there yet, but I believe we are on a good path. We will see in time; I'll keep you posted.