Leapfrog Edge - Better than Vulcan for Resource Estimation?

I currently use a combination of Leapfrog Geo, Snowden's Supervisor, Excel and Vulcan to produce a block model and estimate a resource. I'm curious whether Leapfrog Edge is significantly better. Has anyone used it and loved it? Or does anyone know how it compares to Vulcan?

Answers

  • renanglopes
    edited November 2020

    Here at Yamana Gold - Jacobina we have been using Vulcan to estimate resources. In the next few weeks we will try using Edge for the statistics, resource estimation and validation (drift analysis etc.) to take advantage of Leapfrog's integrated system (modelling + estimation).
    After watching some webinars, we know that, for a while, Edge won't cover all the processes we need, because here we work with an unfolding process and run the estimation in multiple passes.

    I will return soon to share our experience comparing Vulcan and Edge.

  • Adding this request to the list: the ability to run variography on the log of the data.
    In nuggety deposits, using the raw data without the natural log is distracting. With my data I had to export the raw data, transform it to log data, re-import it and run the variography, then transform the variogram parameters back to reflect the normal data (even then, the models do not look nice after the back transform).
    It would be a great tool to add.

    I have downloaded LF5 and enjoyed the NS transform; I like it. Thanks for such a great improvement.
    Multi-pass kriging is also needed, and the RBF needs more explanation in the help.
    Finally, in the RBF there is no average distance metadata, so how do we classify?
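    The export/log-transform/re-import workaround described above can be scripted in a few lines. A minimal sketch using pandas, where 'au_ppm' is a hypothetical column name standing in for a real Leapfrog assay export:

```python
import numpy as np
import pandas as pd

# Hypothetical assay table standing in for a Leapfrog drillhole export;
# 'au_ppm' is an assumed column name.
assays = pd.DataFrame({"au_ppm": [0.05, 1.2, 0.0, 15.8, 0.4]})

# Natural-log transform for variography on nuggety data.
# A small additive constant guards against zero grades.
assays["au_ln"] = np.log(assays["au_ppm"] + 1e-6)

# Write the table back out, re-import it into Leapfrog, and run
# the variography on 'au_ln' instead of the raw grades.
assays.to_csv("assays_log.csv", index=False)
```

    The variogram parameters then still have to be back-transformed by hand, which is exactly the friction the request above is about.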
  • We will be releasing transform variography in the near future, which provides the ability to model on normal scores transformed data and back transform the models into raw data space. We have chosen to implement the NS transformation rather than log transforms, as it is a more general solution (and will be used by other features in future).
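    Conceptually, a normal scores transform and its back transform work like this. This is a simplified rank-based sketch, not Leapfrog's implementation; it ignores declustering weights and tie handling:

```python
import numpy as np
from scipy.stats import norm

def ns_transform(values):
    """Rank-based normal scores transform (no declustering weights)."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    ranks = values.argsort().argsort()          # 0..n-1, by grade order
    quantiles = (ranks + 0.5) / n               # midpoint plotting positions
    return norm.ppf(quantiles)                  # map to standard Gaussian

def ns_back_transform(scores, original):
    """Map Gaussian scores back to the raw distribution by interpolation."""
    original = np.sort(np.asarray(original, dtype=float))
    n = len(original)
    table = norm.ppf((np.arange(n) + 0.5) / n)  # Gaussian quantiles
    return np.interp(scores, table, original)

grades = np.array([0.1, 0.4, 0.4, 2.5, 9.7])
z = ns_transform(grades)          # roughly standard-normal scores
back = ns_back_transform(z, grades)  # recovers the raw grades
```

    Variography is then run on `z`, and the modelled results are carried back into raw data space through the back transform table.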

    Multi-pass kriging can be implemented easily using the Combined estimator. Inside a domained estimator, create your 'base' kriging estimator. Once you are happy with it, copy the 'base' estimator (call it something like 'pass2') and modify the search parameters as required (increase the search, relax the criteria, etc.). Repeat for pass 3 and so on as you wish. Merge these passes into a Combined estimator, with the 'base' estimator as the highest priority. You can also combine multiple domains for the same variable into the same Combined estimator. When evaluated on a block model, the combined estimator lets you easily visualise the domain, the estimator within that domain, and the status (for example, whether all blocks in the domain have received a grade estimate).
    We will review the help doc on the RBF.

    Because the RBF is a global solution, there is no associated 'neighbourhood' information inherent in the estimator, but average distance metadata can easily be generated and stored from an ID or kriging pass.
    Thanks for your feedback.

  • You may want to try variable orientations in Leapfrog Edge when the principal directions of mineralisation change across the domain (http://help.leapfrog3d.com/Geo/5.0/en-GB/Content/estimation/variable-orientations.htm?Highlight=variable%20orientation).

    It is also possible to do multiple passes in Leapfrog Edge. First you have to create a new Domained Estimation for each of the passes. I included a snapshot of the first two passes with varying maximum, intermediate and minimum search ranges.


    You may also have to vary other parameters, such as the minimum number of samples, maximum number of samples, sector search, etc., under the search tab.

    After creating the individual estimators (Pass 1, Pass 2, etc.), you need to create a combined estimator (http://help.leapfrog3d.com/Geo/5.0/en-GB/Content/estimation/estimators.htm?Highlight=combined%20estimator#combined-estimators) for the passes.


    You will then evaluate this combined (multi-pass) estimator onto your block model.


    The beauty of having it all in Leapfrog Geo/Edge is that you can do all your domaining, variography, estimation and reporting without having to export and post-process in other software. I would be interested to see your comparisons of Vulcan and Leapfrog Edge.

  • I have begun using LF Edge for the full estimation process, right down to the block modelling (because the coding process is much easier in LF).

    I find Vulcan clunky and not user-friendly. Part of it is that the program is weighed down by decades of programming, but part of it is just too much engineering - literally. I consider Vulcan an engineer's tool; LF is a geologist's tool.

    I just completed my first estimate entirely in LF, from geological model to block model. It was MUCH faster when assessing the data and modelling parameters; we had a tight timeline and I don't think it would have been possible in Vulcan without working 16-hour days that week. LF also has much better transparency and referencability (I'm sure there is a better word, but I am still pre-coffee). It's much easier to write up afterwards because you can easily follow the flow, or show and justify what you did and why.

    My biggest stumbling block was getting it all into a Vulcan block model, but we finally figured out a few things to make it relatively quick (it still took a couple of hours): everything in the headers needs to be lower case, you need to make a block definition file to match your LF block model, and you need to match a couple of things in the CSV layout to Vulcan. Now that I have that down, I am very comfortable with the process.
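    The header clean-up part of that hand-off can be scripted. A pandas sketch; the column names and order here are illustrative and would need to match your own block definition file:

```python
import pandas as pd

# Toy stand-in for a Leapfrog block model export.
blocks = pd.DataFrame({
    "Centroid X": [1000.0], "Centroid Y": [2000.0], "Centroid Z": [150.0],
    "AU_Kriged": [1.34],
})

# Vulcan import wants lower-case headers, and the column order must
# match the block definition file created for this model.
blocks.columns = [c.lower().replace(" ", "_") for c in blocks.columns]
vulcan_order = ["centroid_x", "centroid_y", "centroid_z", "au_kriged"]
blocks[vulcan_order].to_csv("blocks_for_vulcan.csv", index=False)
```

    A small script like this makes the "couple of hours" a one-off cost: once the layout matches the block definition, the same script works for every re-export.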

  • I guess it really depends on the size and complexity of the project/deposit. I'm not a Vulcan user, but I assume it has the standard functionality of other packages.

    I guess Edge is perfect for simple projects with modest amounts of data and a couple of variables.
    It can definitely work for some complex projects, but not all.

    Here are some issues I’ve experienced/seen/been told with larger and complex projects:

    • Manually setting up thousands of estimators: Imagine you have 40 domains x 8 variables x 4 estimation passes (3 kriging, 1 nearest-neighbour). You will end up setting up 1,280 estimators. Even if you copy estimators, you will need to adjust each search, sample count, etc., and this requires tens of thousands of clicks. Manual errors here are practically guaranteed.
    • Lack of macro/scripting: The issue above is largely a result of this limitation. There is no way to script your searches and estimation parameters. Imagine you need to change the minimum number of samples, octant search, etc. for the 1,280 estimators above… yes, you will need to click through and change all of them. I understand a "master table" is under development, which will solve some of these problems, but not all.
    • Sub-optimal sub-blocking: The way Edge triggers sub-blocks is not ideal: it generates the maximum number of sub-blocks within a parent block if a surface touches the block. This produces Leapfrog projects tens of GB in size, even for small deposits with narrow veins. There should be a way to optimise sub-block sizes and reduce their number.
    • Pre-processing graphs: Sometimes you open an estimator and it freezes because it needs to reprocess the contact plot. It can be annoying having to wait many minutes just to check some parameters. The same can happen with variograms (though we do have the auto-process option) and swath plots. On swaths, any change in the block model triggers re-processing of ALL swath plots, even if there are zero changes in the estimation domain (I assume because LF processes all numeric values for all slices, even the ones you are not plotting). I had a project with 80+ swaths that would take 1 hour each (you can create the same swaths in Python about 60x faster, and run them only when you need them).
    • Processing performance: There have been some performance comparisons with other software, and there are huge differences between Edge and the others; I've seen some in the order of tens of times slower (hours instead of minutes; days instead of hours). I'm not sure the difference comes from the kriging algorithm itself, as Edge (someone correct me if I'm wrong) probably uses kt3d from Clayton Deutsch and CCG, which is very fast. There is some redundant filtering (being worked on now?), block model manipulation and other background overhead slowing down the overall performance.
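    The Python swath approach mentioned above can be as simple as binning block centroids along one axis and averaging. A minimal sketch with made-up data; a real swath would also weight by tonnage and overlay the drillhole composites:

```python
import numpy as np
import pandas as pd

# Synthetic block model: easting and an estimated gold grade.
rng = np.random.default_rng(0)
blocks = pd.DataFrame({
    "x": rng.uniform(0, 1000, 5000),       # block centroid easting (m)
    "au": rng.lognormal(0.0, 0.8, 5000),   # estimated grade
})

# 25 m swaths along easting: mean grade and block count per slice.
blocks["swath"] = (blocks["x"] // 25).astype(int) * 25
swath = blocks.groupby("swath")["au"].agg(["mean", "count"])
# swath.plot(y="mean")  # matplotlib, if a chart is wanted
```

    Because this only recomputes when you run it, it avoids the automatic full re-processing complained about above.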

    To be fair, I use Edge myself for a good portion of the projects I've worked on over the last few years, and I think the potential is enormous. I guarantee that the final results are very good and basically the same as other packages'. But we have to be aware of its limitations at certain sizes and complexities (I'm sure the team is making lots of changes and always developing cool stuff for us :) )

  • As per the comment by user @DhanielCarvalho, once you have more than one or two estimation domains x multiple estimated variables x multiple estimation methods, it is time to consider switching to 3D block modelling software that allows open, scriptable management of estimation parameters. Vulcan and MineSight 3D both allow for these workflows.
    The potential for errors in the labour-intensive process of copying/pasting domains, clicking on folders and editing items, all while waiting for LF Geo to process charts in the background, is significant. Not to mention time-consuming and taxing on your 'work momentum'.
    I'm interested to understand whether the "master table" will be editable in Excel for importing as *.csv?

    A second complaint would be the lack of ability for the user to script in-place calculations, e.g.:
      - IF isblank(FE) THEN FE = 0 ELSE FE


    To summarise: the Edge module does the job, but there needs to be a way to elegantly manage estimation parameters when projects are complex with respect to multiple domains/estimators/estimation methods.
    Calculations should also be expanded to allow the user to script in-place calcs.
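    For what it's worth, the example calculation above is a one-liner once block values are exported to a DataFrame outside Leapfrog (the 'fe' column name is illustrative):

```python
import pandas as pd

# Toy block values; None stands for an unestimated (blank) block.
blocks = pd.DataFrame({"fe": [62.1, None, 58.4, None]})

# IF isblank(FE) THEN FE = 0 ELSE FE
blocks["fe"] = blocks["fe"].fillna(0)
```

    This is exactly the kind of in-place conditional the request above asks to have natively scriptable inside Edge.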
  • Hi Richard,

    Thanks for your comments and questions.

    Making the parameter table fully editable is being actively developed this iteration. From our support system, I can see you are in contact with Stephan, who will respond to you in due course, as we have some follow-up questions that both I and our Technical Domain Expert, Mike Stewart, are keen to put to you.

    Until then, best regards,
    Rachel
  • What's the best way in LF Edge to estimate with a kriging plan, where successive estimation passes with larger (more relaxed) search radii estimate into blocks that were left unestimated in prior passes?

    I am used to doing this with ease in Isatis and Vulcan, employing conditions and writing expressions in the calculator, but have not yet worked out how to do this in Edge.

    Thanks in advance.
  • @HaydenEyers

    There is a feature called "Combined Estimator" (right click on the Estimation folder and the option is in that menu).   



    See the Help topic for a more detailed explanation.

    This feature allows the user to build a hierarchy of estimates. For example, in your case the top estimate would be the most restricted pass (in terms of search radii) and the last estimate would be the most relaxed pass. When a combined estimator is evaluated onto the block model, blocks are first populated with values from the top-most estimate in the combined estimator. Blocks that cannot be estimated look to the next estimate in the hierarchy, and so on, until every block that can be populated has been.

    The help documentation illustrates this in detail. 

    Hope this helps, 
    Rachel

  • Thanks for the reply and details @RachelMurtagh ! I will take this further with the documentation.