Running model on single images #23
Hi @tiusty, If your single .tif image is small enough to run in a reasonable amount of time on the front end, then you can define a polygon layer that simply contains the extent of the tif and run "Download" on it through the web interface. If it is much larger, or you want to interact with your model programmatically, then you can save a checkpoint and use that ModelSession object as a normal class in a Jupyter notebook. Happy to explain more in-depth if these don't make sense! Best,
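For the first approach, a rough sketch of generating a polygon layer that covers the extent of a GeoTIFF is below. The file names are placeholders, and whether the tool ingests a GeoJSON in exactly this form is an assumption to check against its documentation; the point is just turning the raster's bounds into a lon/lat polygon.

```python
# Sketch: write a GeoJSON polygon that covers the extent of a GeoTIFF.
# File names are placeholders; the GeoJSON layout is an assumption about
# what the web tool accepts as a custom layer.
import json
import rasterio
from rasterio.warp import transform_bounds

with rasterio.open("my_area.tif") as f:
    # Re-project the raster's bounds to EPSG:4326 so the polygon is in lon/lat.
    left, bottom, right, top = transform_bounds(f.crs, "EPSG:4326", *f.bounds)

extent_geojson = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {},
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [left, bottom], [right, bottom], [right, top],
                [left, top], [left, bottom],
            ]],
        },
    }],
}

with open("my_area_extent.geojson", "w") as out:
    json.dump(extent_geojson, out)
```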
Thanks for the response! I will give that a try tomorrow and let you know how it goes.
Hi @calebrob6, I am taking the first approach for now. Let's say I have a plain png file. I converted it to a tif file on Ubuntu by running `convert` on it, and then updated the HCMC dataset definition to point at the new file. The image shown on localhost still looks like HCMC, but when I shift-click to apply segmentation it seems to load the new image underneath while actually computing the segmentation, and then it fails. The first issue was that when the .tif file was loaded, f.crs was not populated. To see whether that was the only issue, I hardcoded it to the value from the HCMC tif file. After that I tried again, but then ran into another error. So maybe the format of my tif file is not proper, since I converted it from a png directly? Wondering if you have any thoughts.
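As an aside, one quick way to confirm whether the converted tif actually carries georeferencing is to open it with rasterio (the file name below is a placeholder). A straight ImageMagick conversion will typically leave the CRS empty, which matches the f.crs issue described above.

```python
# Quick sanity check (file name is a placeholder): a tif produced by a plain
# ImageMagick `convert` will usually have no CRS and an identity transform.
import rasterio

with rasterio.open("my_image.tif") as f:
    print(f.crs)        # None if the file carries no coordinate reference system
    print(f.transform)  # identity transform if the file is not georeferenced
    print(f.bounds)     # pixel-extent bounds when no real georeferencing exists
```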
Hi @tiusty, I am also just figuring out how to use this tool, but figured I would chime in with what I have found out. I am fairly sure that by using 'convert' you are not providing the CRS or any location information. The tif file should actually be a GeoTIFF. Take a look at osgeo's gdal.Translate; you'll need to provide the output bounds and the CRS. https://gdal.org/python/osgeo.gdal-module.html#Translate You can check whether your tif is referenced correctly by importing it into QGIS and using it as a layer: the image should line up with another map. Hope that helps.
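For example, a minimal sketch of that suggestion with the GDAL Python bindings might look like the following. The paths, bounds, and EPSG code are made-up values; substitute the real extent of your image and the CRS it should live in.

```python
# Sketch: assign a CRS and bounds to a plain PNG, producing a GeoTIFF.
# Paths, bounds, and EPSG code below are illustrative placeholders.
from osgeo import gdal

gdal.Translate(
    "image_georeferenced.tif",                    # output GeoTIFF
    "image.png",                                  # input PNG (no location info of its own)
    format="GTiff",
    outputSRS="EPSG:4326",                        # CRS to assign
    outputBounds=[106.60, 10.90, 106.70, 10.80],  # ulx, uly, lrx, lry in that CRS
)
```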
Hi @richard-mackie, thanks for the info. I downloaded QGIS and used its georeferencer to convert the png into a geotiff, and I was able to get segmentation results from the image after that.
This may not exist, but I'm wondering if @calebrob6 has a comment on it: the reason I'd like to be able to pass a single png image to the frontend, without converting it to a geotiff or loading a basemap with the associated layers, is that my use case involves integrating with AirSim. Generating a basemap with the associated layers and producing a geotiff is therefore not very relevant for me. My understanding is that the geotiff is needed so that predictions can be produced for the inference window rather than for the whole image. @calebrob6 mentioned a way to define a single image as a polygon layer; if that image could then be interacted with, i.e. sample points set for retraining and a prediction run on the extent of the image, that would be great. Otherwise this is probably a feature request that falls outside the intended use case of the Microsoft Land Cover tool.
Hi tiusty,
Sorry for the late reply, I've been on vacation this week, and will be much
more responsive next week.
This is a good point you've brought up. Perhaps I can write a script that
"inputs" a PNG into the tool. I will get back to you on this.
Best,
Caleb
Hi @calebrob6, No worries at all. Hope you had a good vacation. That would be super useful if possible. Thanks!
Hi,
I ran the web server locally and it is super useful.
One thing I have tried but have been unable to do so far is to run the model on a single image (e.g. a single .tif file).
Thanks!