
Importing csv file in QGIS including names and description

I'm just getting to learn QGIS. I want to make a map (Google maps) where I can map all our customers from our database. Including names and even better: description, address, telephone, etc. All this data is available in our own database.

So far I've managed to import a CSV list using MMQGIS, then added another layer with OpenLayers. All works fine.

Also tried exporting the .shp as .kml, uploaded it to Dropbox and opened the link in a newly created personal Google Map. Works fine.

But the result is a map with locations only. In which step can I include company names, without having to add them by hand? I tried importing CSV files via MMQGIS with additional columns for names, etc., but I can't see where this information is stored in the QGIS project, or whether it is stored at all.

Any help is welcome!

Edit: I found out that I can just import an xlsx file and select the correct columns in the Google Maps creator… Edit 2: Now I see that only Google accounts can edit the map, so it would make more sense to use OSM or similar, I guess. Any thoughts on this? I'm diving into OSM now, but importing xlsx doesn't seem to be as easy as in Google Maps. So the first question remains: how to include names/descriptions in QGIS/OSM. Edit 3: I'm looking for a solution csv->map including names/descriptions, preferably not Google Maps.


Magento 2 import new and update current products with a CSV file and cron job

I previously used Danslo ApiImport in several of my Magento 1.9.X projects in combination with a cron job to import new products and update existing products with CSV files automatically/periodically.

I'm wondering if there is a similar free extension/module for Magento 2.X that offers the same basic import/update functionality of default Magento 2 product attributes (preferably with basic documentation on how to use it)?

Or could someone help with an example script that loads data from a CSV file and updates product qty and price, or a script to create new simple products?


Use layers in maps and scenes

You build a map or scene by adding data layers to it and configuring how the layers look and behave. You can add layers you published, layers shared with you by others, and layers from other providers—such as ArcGIS Living Atlas of the World—to your maps and scenes. You can use the options on the Overview tab in a layer's item page to open it in Map Viewer, Map Viewer Classic, or Scene Viewer, or you can start in one of those viewers and add layers there. See Get started with maps and Get started with scenes for overviews of the process of creating maps and scenes that you and others can use to interact with your layers.

Feature layers can be used in analysis tools—in Map Viewer Classic and ArcGIS Pro—and in custom apps to answer spatial questions, discover patterns, and identify trends.


Added to that, Microsoft does not have much incentive to ensure interoperability between their version of CSV and Drupal's, so it is not surprising that Excel offers no (direct) way to wrap cell contents containing single quotes in double quotes. You've got to do something a bit more complicated if you want Excel to output Drupal-friendly CSV files.

Or, since Drupal is telling you where the problem is, you can do it manually. If you only have a few rows with apostrophes, then that may not be a big deal. If you have a lot of rows in your CSV with apostrophes or single quotes, then you've got a bit of pain.

Alternately, if you've got Python on your machine, or are willing to install it, this python script to add double-quotes to CSVs may work for you, and may be much easier than dealing with Excel macros. It all depends on the languages and language environments you're comfortable with, however.
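If the linked script doesn't suit you, the idea is simple enough to sketch with Python's standard csv module (the function name and file paths below are just placeholders): read each row and rewrite it with every field wrapped in double quotes, so embedded apostrophes can't be mistaken for delimiters.

```python
import csv

def quote_all(src_path, dst_path):
    # Rewrite a CSV so every field is wrapped in double quotes.
    # Embedded double quotes are escaped by doubling them, per the
    # usual CSV convention, so apostrophes inside fields are safe.
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst, quoting=csv.QUOTE_ALL)
        for row in csv.reader(src):
            writer.writerow(row)
```

Run it over a copy of your export first and spot-check the result before feeding it to Feeds.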

One might argue that Drupal (or more specifically the Feeds module) should support Microsoft's CSV, and thus handle apostrophes gracefully when not wrapped in double quotes. If you are of this opinion, you may want to file a request with the Feeds project. Since there is no (standard) CSV spec, this can't be considered a bug (AFAIK), but it does seem like a useful feature that they might want to add.


Ordnance Survey OpenData In QGIS 3: Part 4

At the end of my previous post on this topic, I left you with this map of the area around the mountain of Blaven (Gaelic Bla Bheinn) on the Isle of Skye:

That concluded a three-part tutorial on using Ordnance Survey OpenData products in QGIS mapping software. (To go to the start of the series, click here.) This post, as promised last time, will deal with adding data from other sources. It’s a bit of a grab-bag of ideas—I’ll mention a few useful data sources, and various ways of importing those data into QGIS, and also describe how to import or create your own map symbols.

The major deficiency with the OS’s OpenData, from the point of view of a hill-walker, is that it lacks any portrayal of mountain paths and tracks. Fortunately, there’s another open data source available which goes some way towards remedying that—the OpenStreetMap project. Their data are free to use under the Open Database License, which requires that they be suitably credited.

To get some path data for the map above, I go to the OpenStreetMap website, and then drag and zoom to reach the area around Blaven. Then I click on the Export button at top left of the web page, which brings up a dialogue box at the left side of the screen featuring a prominent blue button marked “Export”. Above that, you can see a grey box marked up with the latitude and longitude limits of the map view you’re looking at, and the option to “Manually Select A Different Area”:

I click on the “Manual Select” option, adjust the box to select only the area around Blaven that I’m interested in, and click Export. (Selecting too large an area will generate an error message.)

My data are downloadable in the form of a file named map.osm, which I can save under a more memorable name (like blaven.osm) somewhere in my QGIS data folders. Then I load it as a new layer using Layer/Add Layer/Add Vector Layer…. When I’m asked which vector layer I want to add, I select “lines”, which will contain the path data I’m looking for (as well as some other stuff).

We can take a look at the content of this layer by double-clicking on its name to bring up the Layer Properties dialogue and looking at “Source Fields”:

It looks like “highway” is going to be the field we want to process. Now I move to “Symbology” and set up some Rule-based filters to associate markers with only the “highway” values I’m interested in—which turn out to be ‘track’, ‘path’ and ‘footway’. Like this:

I’ve set up my OpenStreetMap tracks to match my definition for Ordnance Survey tracks, and selected a grey dashed line for paths. (For a detailed tutorial on how to set up rule-based filters, take a look at Part 3 of this series, where I used them to set up different label styles for different kinds of named place.)

Here’s the final result (note that I have now added the necessary credit to the OSM data compilers):

It’s actually a better portrayal of the paths on Blaven than appears on Ordnance Survey maps. That’s sometimes the case—OSM path data is extremely variable from place to place, depending as it does on the work of volunteers either walking the routes or plotting them from public domain aerial photographs.

Now I’m going to add some symbols, but first I want to slightly tweak the position of feature names on the map. Firstly, I want the mountain names to be offset from the peaks they label (to make room for symbols to be inserted later). I double-click on the “NamedPlaces” layer to bring up its Layer Properties dialogue box, select “Labels” and then double-click the “Landform” filter to open Edit Rule. In that dialogue I select “Placement”, and then change the label placement to “Around Point” with an offset of two typographical points. (In fact, I could produce a complicated rule applying different offsets for different sizes of text, in the same way I created different sizes of text in the first place, as described in Part 3—but this simple adjustment will do as an example.)

I’d also like to get rid of that giant “Strathaird” label on the map, which is just a distraction, given that it’s not clear what feature it is intended to label. I can do this by selecting the “NamedPlaces” layer, and activating editing by clicking on the little picture of a pencil among the array of icons at the top of the screen. Then I also click on the icon for “Select Features by area or single click”.

Here they are, circled in this screen capture:

Now I can just draw a box round the offending “Strathaird” (at which point the labelled location appears as a little red cross in a yellow square), and hit the Delete key to remove it. Then I can toggle off the little pencil icon, at which point I’m asked if I want to save the changes I’ve made. (Use this facility sparingly—you don’t want to remove labels that you might need in future.) Finally, a click on the little hand icon (just above and left of the pencil in my screen-grab) restores the usual function of the mouse cursor.

The mountain names are all moved above and to the right of the peaks they label. An unwanted consequence is a shift in the labels naming the two corries—there are multiple ways to fix that, either by introducing new placement rules, or by using the layer editing facility to actually drag the labels around to where they’re wanted. But it’s not a big deal in this case, and I don’t want to get too bogged down in additional detail at this point.

So let’s just proceed to adding some symbols from an external dataset. I’ve downloaded the complete dataset of Ordnance Survey triangulation pillars in GPX format from haroldstreet.org.uk. QGIS will recognize the *.gpx file format, so we can add the data as a new layer using Layer/Add Layer/Add Vector Layer….

Once the layer is added, I want to produce a suitable symbol for the triangulation points it marks. I double-click on the layer name listed in the Layers window so as to open its Layer Properties dialogue, go to “Symbology”, and change the Simple Marker from the default circle to a triangle. I set the size to 15 points, making it roughly the same size as my text, and colour it blue and white to produce a match for the Ordnance Survey symbol for a trig point.

The OS symbol has a dot in the middle, and I can reproduce this by adding another layer to my symbol, using the green plus sign that appears on the left above the settings menu, and adding a blue dot of appropriate size on top of the triangle. Here’s the final result—a triangulation pillar on the summit of Blaven:

The website haroldstreet.org.uk provides a whole load of other useful data, including a large selection of hill summits from various lists. It also provides a dataset of mountain bothies. If you find it useful you should consider giving a donation for its up-keep—the option is offered each time you download a file.

POIgraves also offers a range of interesting data, including youth hostel locations.

Because QGIS understands the *.gpx format used by GPS receivers, we can also import routes, tracks and waypoints from GPS devices. Below, I’ve added some colour-coded summit markers from various hill lists, and superimposed the route recorded on my GPS when I ascended Blaven:

Now it would be nice to mark the car-park at the foot of Blaven, where the walk started and finished. There are various ways of doing this. The easiest, if you have a GPS receiver and are at the location, is to record a waypoint and then import the relevant file into QGIS.

Another possibility is to find the location on Google Earth, and mark it with a “placemark”—a little coloured map pin, generated using the map-pin icon at the top of the Google Earth screen. You can then export this placemark in the form of a *.kml file, by right-clicking on the location in the “Places” list at left of screen and choosing Save Place As….

The file produced uses KML (Keyhole Markup Language) which is another file format that QGIS can import as a vector layer. The terms of service for Google Earth certainly appear to give permission to do exactly this, in section 1b. But the point at which a few coordinates turn into a “derived dataset” (to which Google might object legally) is not clear to me, so I’m not going to use that approach here.

Instead, I’m going to use the old fashioned method of just looking at a map to get a set of coordinates. Checking the “Coordinates” panel at the bottom right of the QGIS display, while moving the cursor over the map location of my car park, tells me it’s located at 156064,821604. These values are given in the coordinate system for this QGIS project—which is, in fact, the standard Ordnance Survey system of eastings and northings, though probably not in an immediately familiar form. The values are given in metres, and use full numerical coordinates, rather than the familiar two-letter designator for each 100-kilometre square.

You can see the relationship between the two systems using a chart that shows the distance of each 100-kilometre square from the origin of the OS grid. So the NG square, which contains Blaven, is 100 kilometres east and 800 kilometres north. To specify a location within NG to the nearest metre, we therefore need a six-digit easting followed by a six-digit northing.

This full set of digits appears at all the corners of Ordnance Survey maps, though they go largely unnoticed. That means I can read coordinates suitable for QGIS directly from a paper map.

Taking a look at the relevant OS sheet for Blaven, I see that the car park is at NG 560216 (to the nearest 100m). So that is 1560,8216 in full numerical style (to the nearest 100m), or 156000,821600 if we add trailing zeroes to give a figure correct to the metre. Comparing this to the figure I pulled directly off QGIS (156064,821604) shows that everything is internally consistent. So I can take coordinates from a paper map and convert them to something QGIS understands. Or I can just read coordinates directly from QGIS itself.
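If you find yourself doing this conversion often, it's easy to script. Here's a minimal Python sketch, assuming a hand-made offsets table that only covers the NG square used in this example (a full implementation would derive the offsets from both grid letters):

```python
# Offsets (in metres) of 100-kilometre grid squares from the origin
# of the OS grid -- only the square used in this example is listed.
SQUARE_OFFSETS = {"NG": (100_000, 800_000)}

def grid_ref_to_xy(ref):
    """Convert e.g. 'NG 560216' to full numeric (easting, northing) in metres."""
    letters, digits = ref.split()
    half = len(digits) // 2
    # A 3-digit easting/northing pair is precise to 100 m, so each
    # half of the figure group is scaled up to metres accordingly.
    scale = 10 ** (5 - half)
    e0, n0 = SQUARE_OFFSETS[letters]
    return (e0 + int(digits[:half]) * scale,
            n0 + int(digits[half:]) * scale)
```

For the car park, grid_ref_to_xy("NG 560216") gives (156000, 821600), matching the figure read off the paper map.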

But how do I get those figures into QGIS? I’m going to write a simple little text file of Comma Separated Values. Here it is, giving the data for the car park:

ID,Nature,X,Y,Orientation,Name
1,Carpark,156064,821604,,Blaven Car Park

The first line gives the names for each field in the dataset. ID is a unique identifier that I probably don’t really need in a tiny file like this, Nature contains information about the kind of feature I’m describing, X and Y give the coordinates of the feature, Orientation lets me specify a rotation for any label applied, and Name is … well, the name. All fields are separated by commas. The next line is the entry for my car park, using coordinates I’ve read off the QGIS map. Since I’m not interested in specifying an orientation I can leave that field blank—one comma follows immediately after another in that location.
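If you were generating this file from a script rather than typing it by hand (say, by exporting features from a database), Python's csv module takes care of the commas and blank fields for you. A minimal sketch, with the file name features.csv chosen arbitrarily:

```python
import csv

FIELDS = ["ID", "Nature", "X", "Y", "Orientation", "Name"]

rows = [
    # A blank Orientation simply produces two consecutive commas.
    {"ID": 1, "Nature": "Carpark", "X": 156064, "Y": 821604,
     "Orientation": "", "Name": "Blaven Car Park"},
]

with open("features.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()   # first line: the field names
    writer.writerows(rows)
```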

Now I’ll add a couple more items to my list:

ID,Nature,X,Y,Orientation,Name
1,Carpark,156076,821610,,Car Park
2,Feature,153792,822358,30,Choire a’ Caise
3,Building,151379,819984,,Boat House

I save this as a text file, but with the suffix *.csv to specify its nature. Then I can load it into QGIS using Layer/Add Layer/Add Delimited Text Layer…, selecting Project CRS for the “Geometry CRS” option, and ticking “First record has field names”. You can see the little database that produces at the bottom of the Delimited Text dialogue box:

With the layer loaded, I can now set up filters and rules based on the content of the Nature field. Here, for instance, is the “Symbology” entry for the Layer Properties, showing how I’ve set up “Nature” filters. (I gave a detailed description of using this sort of rule-based labelling system in Part 3 of this series.)

I gave the car park its own symbol, I formatted Choire a’ Caise so that its text matched the other corries, and the Boat House so that it matched other buildings. Here’s the result, with the new features circled:

QGIS provides a good selection of different symbols, but I designed the car park symbol myself, to roughly match British road signs:

If you don’t fancy drawing your own symbols, you can usually find suitable Public Domain images, like this one on Wikimedia Commons.

Symbols need to be in *.svg (Scalable Vector Graphics) format. The Wikimedia symbol I linked to above already is, but if you’re faced with a *.jpg or *.png symbol (like the one I produced), then there are many free and easy-to-use conversion utilities on-line—I used this one. Once you’ve produced your *.svg file, copy it into a sub-folder of the QGIS program directory on your hard drive. For QGIS 3, the sub-folder is /apps/qgis/svg/, which contains a number of themed sub-folders. For lack of a better idea, I dropped my carpark.svg into the /symbols sub-folder. Once there, it became available to me when editing “Symbology”—by changing the “Symbol Layer Type” to SVG Marker, I was able to scroll down and find my new symbol amid the pre-existing selection.

Finally, I confess that an Ordnance Survey map always looks naked to me without a superimposed one-kilometre grid, which is also an aid to judging scale. Charles Roper has produced a Public Domain set of ESRI shape files for the Ordnance Survey grid. The main trick to using these grid files is to select “Transparent Fill” for the fill colour—otherwise you’ll just end up with an opaque tiling that obscures everything else! ( I dealt in detail with managing shape files in Part 1 and Part 2.)

So here’s the final map. There are still things that could be improved—for instance, the ability to edit layers in QGIS goes far beyond simply being able to delete unwanted labels, as I did above. But I hope I’ve shown you how easy it is to produce useful and attractive UK maps using only open data sources.


Import new archival descriptions via CSV¶

The following section will introduce how an archival description CSV of new records can be imported into AtoM via the user interface. AtoM also has the ability to use a CSV import to update existing descriptions - for more information on this, see below.

When importing new records, AtoM can also check for existing records that seem to match the descriptions you are about to import, and skip these records if desired - they will be reported in the Job details page of the related import job (see: Manage jobs for more information). This can be useful if you are uncertain whether some of the records in your CSV have been previously imported - such as when passing records to a portal site or union catalogue. For more information on the criteria used during a CSV import to identify matches, see below, Matching criteria for archival descriptions.

Before proceeding, make sure that you have reviewed the instructions above, to ensure that your CSV import will work. Here is a basic checklist of things to check for importing a CSV of archival descriptions via the user interface:

  • CSV file is saved with UTF-8 encoding
  • CSV file uses Linux/Unix style end-of-line characters ( \n )
  • All parent descriptions appear in rows above their children
  • All new parent records have a legacyID value, and all children include the parent’s legacyID value in their parentID column
  • No row uses both parentID and qubitParentSlug (only one should be used - if both are present AtoM will default to using the qubitParentSlug)
  • Any records to be imported as children of an existing record in AtoM use the proper qubitParentSlug of the existing parent record
  • If you have physical storage data in your CSV, you have ensured that all 3 physical storage columns are populated with data, to avoid the accidental creation of duplicate storage locations (see above, Physical object-related import columns)
  • You have reviewed any other relevant data entry guidelines in the section above: Prepare archival descriptions for CSV import
  • You have reviewed how the authority record matching behavior works above, and know what to expect from your import.

If you have double-checked the above, you should be ready to import your descriptions.
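Two of the checklist items above (UTF-8 encoding and Unix-style line endings) are easy to get wrong when the CSV comes out of a spreadsheet on Windows. A small Python sketch that normalizes both, assuming the source file is already readable as UTF-8 (possibly with a byte-order mark):

```python
def normalize_csv(src_path, dst_path):
    # Re-save a CSV with UTF-8 encoding and Unix-style LF line endings.
    # "utf-8-sig" also swallows a leading BOM, which Excel often adds.
    with open(src_path, encoding="utf-8-sig", newline="") as src:
        text = src.read()
    # Collapse Windows (CR LF) and old Mac (CR) endings to LF.
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    with open(dst_path, "w", encoding="utf-8", newline="") as dst:
        dst.write(text)
```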

To import a CSV file via the user interface:

  1. Click on the Import menu, located in the AtoM header bar, and select “CSV”.
  2. AtoM will redirect you to the CSV import page. To import new archival descriptions, make sure that the “Type” drop-down menu is set to “Archival description” and the Update behaviors drop-down is set to “Ignore matches and create new records on import.”
  3. AtoM can check for existing records that seem to match the descriptions you are about to import, and skip these records if desired - they will be reported in the Job details page of the related import job (see: Manage jobs for more information). To enable this option and skip matched records, click the checkbox labelled “Skip matched records.”
  4. If you do not want your files indexed during the import, you can click the checkbox labelled “Do not index imported items.” This will prevent the new records from automatically being added to AtoM’s search index.

If you do not index your records during import, they will not be discoverable via search or browse in the user interface! You will need to know the exact URL to reach them. To make them visible in the interface again, a system administrator will need to rebuild the search index. See: Populate search index.

  5. When you have configured your import options, click the “Browse” button to open a window on your local computer. Select the CSV file that you would like to import.
  6. When you have selected the file from your device, its name will appear next to the “Browse” button. Click the “Import” button located in the button block to begin your import.

Depending on the size of your CSV import, this can take some time to complete. Be patient! Remember, you can always check on the status of an import by reviewing the Job details page of the related import job - see: Manage jobs for more information.

  7. After your import is complete, AtoM will indicate that the import has been initiated. A notification at the top of the page will also provide you with a link to the Job details page of the related import job. Alternatively, you can click the “Back” button in the button block at the bottom of the page to return to the CSV import page, or navigate elsewhere in the application.

Want to find your recent imports? You can use the sort button located in the top-right hand side of the archival description browse page to change the results display to be ordered by “Most recent” if it is not already - that way, the most recently added or edited descriptions will appear at the top of the results. If you have come directly here after importing your descriptions, they should appear at the top of the results.

  8. If any warnings or errors are encountered, AtoM will display them on the Job details page of the related import job. Generally, errors will cause an import to fail, while warnings will be logged but will allow the import to proceed anyway. Errors can occur for many reasons - please review the checklist above for suggestions on resolving the most common reasons that CSV imports fail. In the example pictured below, the CSV includes a qubitParentSlug value for a description that does not exist - so AtoM cannot attach the CSV row description to its intended parent:

Images in CSV file don't show up in Anki

I'm trying to import a CSV set of notes that include image references into Anki.

I have a file with Cloze substitutions and in one note I am using an image in the text:

I've put duck.jpg in the collection.media folder (alongside a lot of working media), but when I import the file I see a broken image icon inside Anki.


Prepare events for CSV import¶

The Events CSV import can be used to supplement the types of events that associate an actor (represented in AtoM by an authority record) and an information object (represented in AtoM by an archival description). In AtoM’s data model, an archival description is a description of a record, understood as the documentary evidence created by an action - or event. It is events that link actors to descriptions - see Entity types for more information, and see the section above for more information on actors and events in the archival description CSV: Creator-related import columns (actors and events). The Events CSV can be useful for adding other event types that relate actors to descriptions, such as publication, broadcasting, editing, etc. At this time, the events import will only work with archival descriptions that have been created via import.

The event import processes 3 CSV columns: legacyId, eventActorName, and eventType. The legacyId should be the legacy ID of the information object the event will be associated with. The eventActorName and eventType specify the name of the actor involved in the event and the type of event. An example CSV template file is available in the AtoM source code ( lib/task/import/example_events.csv ) or can be downloaded here:

Before proceeding, make sure that you have reviewed the general CSV instructions above, to ensure that your CSV import will work. Here is a basic checklist of things to check before importing a CSV of events:

  • The target description was imported using either the command line or the CSV import in the user interface - events import will not work with descriptions created in the user interface.
  • The CSV file is saved with UTF-8 encoding
  • The CSV file uses Linux/Unix style end-of-line characters ( \n )
  • All legacyID values entered correspond to the legacyID values of their corresponding archival descriptions
  • The events CSV file should be renamed to match the source_name value of the previous import. See above for more information: Legacy ID mapping: dealing with hierarchical data in a CSV.
  • If you are referencing existing authority records already in AtoM, make sure that the name used in the actorName column exactly matches the authorized form of name in the authority record. See above for more information on how AtoM attempts to identify authority record matches: Attempting to match to existing authority records on import.

If you have double-checked the above, you should be ready to import your events.
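For illustration, here is a small Python sketch that writes an events CSV with the three columns described above. The IDs, actor names, and file name are invented for the example; the csv module's lineterminator option takes care of the Unix line-ending requirement.

```python
import csv

COLUMNS = ["legacyId", "eventActorName", "eventType"]

# Illustrative rows only: each legacyId must match the legacy ID of an
# imported description, and actor names must match authorized forms.
events = [
    {"legacyId": "250", "eventActorName": "Jane Example",
     "eventType": "publication"},
    {"legacyId": "251", "eventActorName": "Example Broadcasting Co.",
     "eventType": "broadcasting"},
]

with open("example_events.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS, lineterminator="\n")
    writer.writeheader()
    writer.writerows(events)
```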


Export¶

phpMyAdmin can export into text files (even compressed) on your local disk (or a special webserver folder, $cfg['SaveDir']) in various commonly used formats:

CodeGen¶

NHibernate file format. Planned versions: Java, Hibernate, PHP PDO, JSON, etc. So the preliminary name is codegen.

CSV¶

Comma separated values format, which is often used by spreadsheets and various other programs for export/import.

CSV for Microsoft Excel¶

This is just a preconfigured version of the CSV export which can be imported into most English versions of Microsoft Excel. Some localised versions (like “Danish”) expect “;” instead of “,” as the field separator.
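The same separator difference is easy to reproduce outside phpMyAdmin; for example, in Python (a sketch with made-up data):

```python
import csv
import io

rows = [["id", "city"], [1, "København"]]

buf = io.StringIO()
# Localised Excel builds (e.g. Danish) expect ';' rather than ','.
writer = csv.writer(buf, delimiter=";")
writer.writerows(rows)
print(buf.getvalue())
```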

Microsoft Word 2000¶

If you’re using Microsoft Word 2000 or newer (or compatible such as OpenOffice.org), you can use this export.

JSON¶

JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write, and it is easy for machines to parse and generate.

Changed in version 4.7.0: The generated JSON structure has been changed in phpMyAdmin 4.7.0 to produce valid JSON data.

The generated JSON is a list of objects with the following attributes:

Type of the given object; can be one of:

  • header - export header, containing a comment and the phpMyAdmin version
  • database - start of a database marker, containing the name of the database
  • table - table data export



Advanced Data Handling

  • Supported file formats:
    • Excel, text, csv
    • SQL Server, MS Access
    • MapInfo tab
  • XRF data import utility
  • Validate data using the Data Doctor tool
  • Join/append new data into the current workspace
  • Random data subsampling
  • Apply templates to multiple datasets or project areas
  • Track workflow progress using saved checkpoints
    Dynamic Graphical Environment

    • Display sample information for up to three variables using colour, shape and size attributes
    • Live update across all displays based on selections and attributes within your data
    • Render groups of data visible and invisible on-the-fly
    • Save interpretation work as visual attributes and distribute attribute information in the dataset to others within your organization

    Statistics, Graphs, Maps

    • Summary statistics reports that respond dynamically to parameter selection, data visibility, data grouping and multivariate or univariate modes
    • Frequency tables, Pearson’s and Spearman Rank correlation matrices and regression analysis
    • Visualise data relationships using XY, probability, ternary, box and scatterbox plots
    • Point density heat maps

    • Visualise your interpretation spatially in an attribute map
    • Compare high or low concentrations across multiple elements using variable maps and gridded images
    • Examine distributions and levels of target elements relative to background values with thematic maps coloured by “times-background” operation

    Inbuilt Reference Libraries

    • Classification diagrams: rock, element ratio PER/GER, alteration, regolith, Ni and Cu exploration, IOGC, metamorphic, geometallurgy and diamond indicator
    • Calculations: CIPW norms, exploration indices, hydrothermal alteration, molar and petrogenetic ratios, REE and regolith
    • Spider normalisations: chondrites, crust, mantle, CRM standards
    • Templates: aiSIRIS spectral data diagrams, graphs and mapped minerals
    • Mineral, rock standards and OSNACA composition nodes

    Drillhole Data Tools

    • Display your drillhole data using single and multi-trace downhole and line plots
    • Visualise your drillhole data in 3D space using our 3D Attribute Map
    • Wavelet Tessellation, introduced in 7.1, is a multi-scale edge detection method that makes the link between a given variable and lithology logs

    Quantitative Techniques

    • Principal components analysis
    • Mahalanobis distance calculations
    • Discriminant projection analysis
    • Auto domain classification
    • Classification and regression trees
    • Self-organising maps
    • Regression analysis
    • K-means clustering
    • Data levelling
    • Tukey outlier identification
    • Anomaly assessment tool
    • t-SNE
    • Robust multivariate algorithms (including Fast MCD).

    Structural Data Tools

    • Stereonets: orthographic, equal angle and equal area
    • Plot planes and lines
    • Calculate means, great circles (Bingham statistics), β axis and canonical best fit: “small circles”
    • Point density contouring
    • Rose plots
    • Alpha beta gamma conversion.
    System Requirements

    • Operating System: Windows 10 x64 (64-bit) or Mac OS X 10.8.3 or later (Java 1.8 is bundled with the installer)
    • CPU: a multi-core processor is recommended
    • RAM: 6+ GB recommended, 1 GB minimum required
    • Graphics: performance may vary with the graphics card
    • Printers/Plotters: uses operating system defaults
    • Installation permissions: must be installed while logged on with Administrator permissions
    • Installation disk space: at least 300 MB of free space on the Program Files drive is required for the installation process
    • Network: internet access is required to download the software and receive a licence key; internet is not required to run ioGAS™
    • Supporting software: Microsoft .NET Framework 4.6.1 or above is required for the acQuireDirect link (if the target machine has a lower version, the installer will update it)

    Brochure

    Quick Start Guide

    Latest update

    ioGAS™ 7.3

    Download the latest version of ioGAS™ by clicking on the link below. The software is installed into a new folder and existing users* must have a valid licence file in order to run the latest version. A two week trial period is available for new users. Select more information to see what’s new in ioGAS™ 7.3 or refer to the Help file within ioGAS™.

    *Existing users can also download the latest version of IMDEX ioGAS™ via Check for Updates on the Help Ribbon within ioGAS™.

    Integrations

    Leapfrog Live Link

    IMDEX and Seequent have partnered to produce the ioGAS™-Leapfrog live link for rapid 3D visualisation of geochemical data in real time. Geoscientific data can be analysed in ioGAS™ and then visualised and modelled in the 3D environment using Leapfrog Geo. Geochemistry parameters can be added as new attributes and transformed into 3D interpolants to enhance geological models.

    The ioGAS™ Link is sold and licensed as a separate add-on to the Leapfrog Geo software. The link will only run with an active licence of Leapfrog Geo, enabled for the ioGAS™ Link. You will also need Leapfrog Geo v1.3 or later and ioGAS™ v5.0 or later. For further information including how to obtain a trial version of the Leapfrog Geo software or to purchase the ioGAS™ Link please contact your local Leapfrog sales team.

    QGIS Plugin

    This plugin is developed by IMDEX and includes a live link to view and refresh data in ioGAS and QGIS in real time. Alternatively, .gas files can be imported into QGIS as temporary scratch layers or GeoPackage files. The plugin is compatible with Windows or Mac OS operating systems and requires a long term release installation of QGIS 3.10 or later and ioGAS 7.3.

    ArcGIS Pro Add-In

    This add-in is developed by IMDEX to import ioGAS™ attribute map symbology and supporting data into ArcGIS Pro as a point layer. Data is imported as an attribute point feature class in the project default geodatabase. Requires installed versions of ArcGIS Pro 2.0 and ioGAS™ 5.1 or later.

    Geoscience ANALYST Live Link

    Geoscience ANALYST is a free 3D visualisation and communication software for integrated, multi-disciplinary earth models and data developed by Mira Geoscience. The ioGAS™ for Geoscience ANALYST link is available in Geoscience ANALYST Pro, an add-on module which offers object and data editing and creation functionality, data analysis, interpretation tools and utilities.

    The link is purchased separately through Mira Geoscience and activated via the Geoscience ANALYST Pro licence. Contact [email protected] for more information.

    GOCAD® Mining Suite Live Link

    GOCAD® Mining Suite is a customised extension of the SKUA-GOCAD™ – Paradigm® software product developed by Mira Geoscience and used for the interpretation and modelling of geological data. The ioGAS™ for GOCAD® Mining Suite link enables data to be worked with in real-time and for the changes to be viewed in both programs.

    The link is purchased separately through Mira Geoscience and activated via the GOCAD® Mining Suite licence. Contact [email protected] for more information.

    AcQuire GIM Suite Integration

    Import data directly from an acQuire GIM Suite database into ioGAS™ using the acQuireDirect API. The API enables users to import data using a pre-existing section file or by manually choosing drillhole or point samples based on selection criteria. Requires installed versions of ioGAS™ 7.0 or later, the acQuireDirect link component and access to an acQuire GIM Suite database.

    Micromine 2020 Integration

    The Micromine team has worked with IMDEX to enable native ioGAS™ .gas files to be imported directly into Micromine 2020, Micromine's next major release.

    Datamine MapInfo Discover Integration

    Datamine Discover is an add-on module for MapInfo Professional®. Within the Discover import menu is a special utility to import ioGAS™ data directly into MapInfo and plot the sample locations in geographical space, displayed with the last saved attribute symbology. Separate legend tables are also created during import.

    Subsequent changes in ioGAS™ can be made and then saved with the updates displayed using the Discover ioGAS™ import utility refresh option. ioGAS™ can also export data as Tab files which can be opened directly in MapInfo.

    Minalyzer CS Integration

    Export high-resolution geochemistry data captured by the Minalyzer CS continuous XRF core scanner via Minalogger, the Minalyze web-based drill core visualisation software, as native .gas files that can be read directly in ioGAS™.
