Facet Mesh cache
I'm working on a plugin where I want to have one primitive interact with the surface of another when it renders. Without some optimization code to divide a detailed facet mesh into sub-boxes it's too slow. If the user or animation changes the mesh then I'll need to detect that and rebuild the cached optimization data once during a rendered frame.
The best way I could think of to detect that change was to keep a record of the pointer address of the FacetMesh: if the mesh changes then the pointer changes. Unfortunately, during testing I found that Carrara would crash or behave oddly - but only under the debugger. I would get an unknown error result or strange behaviour, and the [Ctrl] key was locked. A free run outside the debugger worked. I know from previous experience that a FacetMesh can be sensitive.
I tried keeping a record of the pointer as either

```cpp
TMCCountedPtr<FacetMesh> pCachedMesh;
```

or

```cpp
void* pCachedMesh;
```

then compared it to the selected object's FacetMesh address in the test. Both worked but appear to be unsafe. Does anyone have advice, or know of a better way to detect changes so I can maintain a cache of data?
Comments
Hi Sparrowhawke,
I tried to see if you could use Carrara's own rendering cache to detect it, but the documentation is really poor.
If I were to do that, I would compute a hash key using the point coordinates and index (something like the sum of index*1000 + x*100 + y*10 + z). Quite fast and reliable. If it's not fast enough, it can easily be spread over several threads (but I don't think that's necessary).
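For illustration, Philemo's key could be sketched like this outside the SDK, assuming a plain vertex array rather than the real FacetMesh accessors (`Vertex` and `MeshHashKey` are hypothetical names, and a multiplicative mix is used instead of a plain sum to reduce collisions):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical vertex layout; a real FacetMesh stores TVector3 points.
struct Vertex { float x, y, z; };

// Cheap, order-sensitive checksum over the vertex list. Any change to a
// coordinate or to the vertex count changes the key with high probability.
uint64_t MeshHashKey(const Vertex* verts, size_t count)
{
    uint64_t key = count;                        // vertex count participates too
    for (size_t i = 0; i < count; ++i) {
        // fold index and quantized coordinates into the key
        int64_t term = (int64_t)i * 1000
                     + (int64_t)(verts[i].x * 100.0f)
                     + (int64_t)(verts[i].y * 10.0f)
                     + (int64_t)(verts[i].z);
        key = key * 1099511628211ull ^ (uint64_t)term;   // FNV-style mixing
    }
    return key;
}
```

Comparing the stored key against a freshly computed one each frame would flag a changed mesh without comparing it vertex by vertex on every ray.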
Philemo
Thanks for the suggestion Philemo, a checksum or hash key might be a possible solution. The optimization must work for each ray that hits the primitive so I don't want to be checking if the other FacetMesh is exactly the same vertex by vertex.
I looked through the SDK but couldn't find any way for my primitive to be informed when a render is about to start. A time or frame change is easy to detect. There is an I3DMasterListener class - I don't know how to code that, but it alone would not let me know if any deformers have changed the instance of the surface I want to interact with.
I'm not sure, but maybe you could try to implement your own subclass of TFacetCache to store your optimized data and add it to the facet mesh.
If it works, this class will be notified every time it needs to recalculate (the "invalidate" method). A little trial and error might be needed to find out the range of identifiers used by the renderer.
Plugins are allowed to implement other classes but not to change them, and Carrara wouldn't be able to recognize or see the change. Thanks for the suggestion, though - you have given me the idea to take a closer look at TFacetCache in FacetMesh::fRenderCaches. Its purpose is not clear and it may only be used in the interactive preview renderer. A poke around and a look at the data might turn up something I can use.
Running a debug and inspecting the values, through the crashing and errors that it causes, I did find that with some deformers the FacetMesh pointer did not change even with a slight change in the mesh. I can also look at obvious changes to the bounding box, vertex count and the transform to rebuild the cache, but this won't pick up the changes made by morphs or simple animation of the surface I want to interact with.
The FacetMesh pointer may not turn out to be a 100% reliable way to detect the change. Whether I use counted pointers or not, as soon as I compare the pointers with operator == or != the fault is triggered.
The TFacetCache in FacetMesh::fRenderCaches was always empty when I looked at it during a debug.
Following up on the possible I3DMasterListener lead, I figured that checking whether the master changes will not be enough, because a deformer on the instance could change. This drew me to the idea of the other change channels, which should solve my problem. If I listen in on I3DShScene::GetTreePropertyChangeChannel() then, if anything changes in the parameters, hierarchy, transforms etc., I can mark my cache as invalid, and when the render begins or the time changes rebuild it once per render frame.
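The invalidate-and-rebuild-once idea can be sketched independently of the SDK (all names here are hypothetical stand-ins; the listener callback just raises a flag, and the render path rebuilds at most once when it sees it):

```cpp
#include <atomic>

// Set from the change-channel callback; read on the render path.
static std::atomic<bool> gCacheDirty{true};

// Stand-in for the IChangeListener notification.
void OnSceneTreeChanged() { gCacheDirty.store(true); }

static int gRebuilds = 0;   // how many times the cache was rebuilt

// Called at the start of each rendered frame.
void PrepareFrame()
{
    // exchange() clears the flag and reports its previous value, so the
    // cache is rebuilt once per invalidation, not once per ray.
    if (gCacheDirty.exchange(false)) {
        ++gRebuilds;        // rebuild the sub-box cache here
    }
}
```

The atomic exchange keeps the flag test and reset as one step, which matters once several render threads can reach `PrepareFrame` at the same time.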
However I've got some problems with implementing IChangeListener and that is a separate topic...
It's always a good idea to use standard (supported) methods when available.
For another project of mine, I'm interested in that change listener. I'll be glad to help (or try to) if you post what your problem is.
Hi Philemo,
I think I've almost got enough of an understanding, and the right code, for using IChangeListener. It is covered in the SDK samples project for the SimpleModeler (modeler.cpp and modeler.h). I wasn't sure how to implement it in my own primitive - in particular whether it will be safe to register the channel inside the constructor and then unregister in the class destructor, with all the cloning that goes on. I'll have to study that example thoroughly, and look into every relevant file it includes, to make sense of the messages in the change channel and how to use it properly. If I get really stuck I'll call for help!
My cache is almost working but I've run into another problem. It is not thread safe. SMP Threading with the SDK is something I do not really understand.
I've created a class that uses an axis aligned bounding box subdivision optimisation to make a Quadtree or Octree. The cache is invalidated by any scene change (flagged using the listener) or a time change as described above.
I'm using it in the context of a volumetric primitive that interacts with a surface. During the render as each ray is passed through my volume I need to test for intersections of that ray with the surface and so that needs to be sped up with the Quadtree cache of the facets for a very detailed mesh. When multi-threading is enabled it crashes because different threads interfere with the cache class variables that are in the middle of searching through the Quadtree. If I change the code and take those critical variables out and make them local/stacked in the volumetric primitive call then it works.
So where I need help is in knowing which parts and classes and their contents are thread safe, which aren't and how to make them thread safe.
Here are the essential bits of code. I'm unsure of some of the declarations as well as how it will behave in a threaded context:
Firstly, my cache class. I derived it from TMCSMPCountedObject but it seems that isn't enough to protect the class variables.
The AASUBBOX class holds the information for all the Quad/Octree boxes when I divide the mesh up and ::Sort() all the facets into their boxes. I use an array for the tree with jump indices rather than pointers. Later in the optimized search for the next facet I need to know which box is currently open and what was the index of the last facet. If that is changed by a thread it will cause all kinds of chaos and crash.
My UINT32STACK is also used during both the sorting and searching to push boxes that need to be opened and checked later as I go deeper into the cache tree. This was the most likely one to go wrong. I tried using a TMCSMPArray but the SDK documentation on that is unclear. This works as it is - but still isn't thread safe - I wouldn't be surprised if my use of the TMCSMPArray is incorrect.
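For what it's worth, the jump-index layout with a per-call stack can be sketched in plain C++. `SubBox`, `SearchState` and `CountFacets` below are hypothetical stand-ins for AASUBBOX and UINT32STACK; the point is that the stack and the traversal cursor live in the caller, not in the shared cache object:

```cpp
#include <cstdint>
#include <vector>

// Flat-array tree node: children are jump indices into the same array,
// -1 for none. Leaves carry a facet range. Read-only once built, so it
// is safe to share across render threads.
struct SubBox {
    int32_t child[8];       // octree jump indices
    int32_t firstFacet;     // -1 for internal nodes
    int32_t facetCount;
};

// Per-thread search state: the open-box stack belongs to one call,
// so concurrent searches cannot trample each other.
struct SearchState {
    std::vector<int32_t> pending;   // boxes still to open
};

// Counts facets reachable from the root; stands in for the real
// ray/facet intersection walk.
int CountFacets(const std::vector<SubBox>& tree, SearchState& state)
{
    int total = 0;
    state.pending.clear();
    state.pending.push_back(0);                 // start at the root box
    while (!state.pending.empty()) {
        int32_t idx = state.pending.back();
        state.pending.pop_back();
        const SubBox& box = tree[idx];
        if (box.firstFacet >= 0) {
            total += box.facetCount;            // leaf: visit its facets
        } else {
            for (int c = 0; c < 8; ++c)
                if (box.child[c] >= 0)
                    state.pending.push_back(box.child[c]);
        }
    }
    return total;
}
```

Each render thread would declare its own `SearchState` on the stack and pass it down, leaving the tree itself immutable and shared.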
My volumetric class contains a TMCCountedPtr pSubBoxMesh which I create with a helper in the plugin constructor and copy when the class is cloned. When a ray hits the volume I build the cache if it is invalid, then use it to start the optimised facet search. I'm also worried that the cache build could be interrupted by another thread, so the first thing I do is reset the invalidate flag. I tried placing that into a 'critical section' but that only resulted in a crash.
I would be grateful for any advice or suggestions about how to make the code thread safe.
(edit:) On reflection, it may be that the cache data actually is protected from thread interference, and that this protection is what causes the error and my confusion.
Hey Sparrowhawke,
As I understand it, your volumetric isn't one instance per thread, so the trick is to pin that search-state data to a specific thread. The easiest solution is to avoid making that stuff a class variable. Can you instance it in whatever call your object is evaluating and just pass it around?
```cpp
//didn't bother to look up real interface
void calculateVolumetricColor(stuff) {
    CacheWalkingState state = new CacheWalkingState();

    interestingStuff = myCache.walkTree(state);
}
```
Regards,
Hi Eric,
Thanks for some advice.
I re-read some articles and chapters to make sure I have a better understanding of multi-threading. These references make the problems and practices of a multi-threaded environment clear, but they refer to their own APIs. Without some Carrara SDK example code to look at, it is hard for me to find the equivalents or to be sure of what they do and how to declare and use them properly. What I have learned is that not everything is in the SDK HTML reference; a file contents search and opening up the .h files can turn up more information.
I did find a mistake in the code I tried to use for the critical section. I failed to create a new section, so the code should read:
My cache needs to be shared by all of the threads. The first thread that gets there needs to stop the others from proceeding, reset the flag, and then build the cache. Each render tile starts up a thread. In the search through the sub-box tree that follows, each of the threads will need its own local data, stack etc. I'm not sure what is protected by using TMCSMPCountedObject, but it appears that any of the threads can access and change the contents.
I've started to run some experiments with the debug and rendering with as small a tile size as I can, trying out the permutations, looking at the memory addresses and trying to figure out what is cloned and shared across the threads. Perhaps I can figure out what is going on from the results.
EDIT:
I had this code very wrong and misunderstood the use of CWhileInCS and the scope of TMCCriticalSection *theCS which must be a global variable in the plugin or a non-static variable of the plugin class. All of the threads need to share the same critical section variable. I misread the docs and thought that I had to declare a local variable and give it to CWhileInCS.
Hey Sparrowhawke,
I have some stuff that might help, but first a question: is it one instance of your object per thread, or a single object shared by many threads?
Thanks,
Hi again Eric,
I have a single object with many threads. It would also need to work for multiple scene instances, each with their own cache and many threads.
I ran some tests and so far it all points to needing a TMCSMPCountedObject for the cache class but only a TMCArray for the stack. If I use a standard TMCCountedObject then it crashes. MyVolume::Clone() was not called during the multi-thread render, and its 'this' pointer did not change, nor did the pointer to the cache pSubBoxMesh. The TMCSMPCountedObject::fRefCountAC value did not go up either. So the class and its data must be shared across all of the threads. I don't really understand what the atomic counter is and does, though, or how relevant it is.
With the correct code for that critical section with an invalid cache flagged it only entered that section once to build the cache in the multi-thread render - so that appears to be safe.
I'm close to having the correct code now through this trial and error. The renders are working - but I appear to have memory leaks and some random crashes. If there is still something wrong then eventually I expect the threads will get tangled up. I've yet to test if I get some of those problems with the multi-thread render disabled.
Hey Sparrowhawke,
Here's what I did for a similar issue.
For the first issue (building the cache for the first one in) I built a template that handles all the cache management and threading issues. I use it for basically all my edge- or mesh-based shaders (Goos, WFP, Bevel, Procedural Lock, Terrain Tools, etc.). The difference is they are all one object per thread, but I think it will work for one object/multiple threads too, and if it doesn't maybe it will give you some ideas. The only thing that gives me pause is the way I use the globalStorageKey. You might have to declare a local instance to use for getCache vs. trying to use the one already floating around in the cache client.
The template. It works with three different objects; your cache (T), what do you need to build your cache (S), and a unique descriptor of the cache (object, settings, etc.) or key(K).
CPP is just #include "DCGSharedCache.h"
```cpp
#ifndef __DCGSharedCacheClient__
#define __DCGSharedCacheClient__

#if CP_PRAGMA_ONCE
#pragma once
#endif

#include "MCClassArray.h"
#include "IShSMP.h"

#ifdef _DEBUG
//#include "Windows.h"
//#include <stdio.h>
#endif

template<class T, class S, class K> class DCGSharedCache;

template<class T, class S, class K> class DCGSharedCacheClient {

private:
    DCGSharedCache<T, S, K>& globalCache;

public:
    DCGSharedCacheClient<T, S, K>(DCGSharedCache<T, S, K>& globalCache) : globalCache(globalCache) {
        cacheElement = NULL;
    };

    ~DCGSharedCacheClient<T, S, K>(void) {
    };

    virtual void fillElement(T& newElement, const S& dataSource) = 0;
    virtual void emptyElement(T& oldElement) = 0;

protected:
    K globalStorageKey;
    T* cacheElement;

    void releaseCache() {
        if (cacheElement != NULL) {
            globalCache.stopUsing(this);
            cacheElement = NULL;
        }
    }

    void getCache(const S& dataSource) {
        cacheElement = globalCache.getCacheItem(globalStorageKey, this, dataSource);
    }
};

template<class T, class S, class K> class DCGSharedCache {

    TMCCriticalSection* myCS;

public:
    DCGSharedCache<T, S, K>(void) {
    };

    ~DCGSharedCache<T, S, K>(void) {
    };

private:
    struct TypePlus
    {
        T* element;
        K key;
        boolean active;
        TMCArray<DCGSharedCacheClient<T, S, K>*> usedBy;

        TypePlus(void) {
            element = NULL;
        };
    };
    TMCClassArray<TypePlus> elements;

    T* fillCachedItem(const K& key, DCGSharedCacheClient<T, S, K>* usedBy, const S& dataSource, TypePlus& element) {
        element.element = new T();
        usedBy->fillElement(*element.element, dataSource);
        element.key = key;
        element.usedBy.AddElem(usedBy);
        element.active = true;
        return element.element;
    }

public:
    void init() {
        myCS = NewCS();
    };

    void cleanup() {
        elements.SetElemCount(0);
        DeleteCS(myCS);
    };

    void stopUsing(DCGSharedCacheClient<T, S, K>* usedBy) {
        CWhileInCS cs(myCS);
        uint32 elementCount = elements.GetElemCount();
        if (elementCount > 0) {
            boolean hasActive = false;
            for (int32 elementIndex = elementCount - 1; elementIndex >= 0; elementIndex--) {
                TypePlus& element = elements[elementIndex];
                uint32 usedByCount = element.usedBy.GetElemCount();
                uint32 newUsedByCount = 0;

                for (uint32 usedByIndex = 0; usedByIndex < usedByCount; usedByIndex++) {
                    DCGSharedCacheClient<T, S, K>*& currentUsedBy = element.usedBy[usedByIndex];
                    if (currentUsedBy == usedBy) {
#ifdef _DEBUG
                        // char temp[80];
                        // sprintf_s(temp, 80, "releasing 0x%x used by 0x%x on thread 0x%x\n\0", &element.element, usedBy, GetCurrentThreadId());
                        // OutputDebugStringA(temp);
#endif
                        currentUsedBy = NULL;
                    }
                    else if (currentUsedBy != NULL) {
                        newUsedByCount++;
                    }
                }
                if (newUsedByCount == 0) {
                    if (element.active == static_cast<boolean>(true)) {
#ifdef _DEBUG
                        // char temp[80];
                        // sprintf_s(temp, 80, "destroying 0x%x on thread 0x%x\n\0", &element.element, GetCurrentThreadId());
                        // OutputDebugStringA(temp);
#endif
                        //nobody is using this anymore, destroy it
                        element.active = false;
                        element.usedBy.SetElemCount(0);
                        usedBy->emptyElement(*element.element);
                        delete element.element;
                        element.element = NULL;
                    }
                }
                else {
                    hasActive = true;
                }
            }
            if (!hasActive) {
                //if nothing is active, let's clear out cached elements completely
                elements.SetElemCount(0);
            }
        }
    };

    T* getCacheItem(const K& key, DCGSharedCacheClient<T, S, K>* usedBy, const S& dataSource) {
        //look outside a critical section to see if we find it quickly
        //with no contention
        uint32 elementCount = elements.GetElemCount();
        for (uint32 elementIndex = 0; elementIndex < elementCount; elementIndex++) {
            TypePlus& element = elements[elementIndex];
            if (element.active && element.key == key) {
                uint32 usedByCount = element.usedBy.GetElemCount();
                for (uint32 usedByIndex = 0; usedByIndex < usedByCount; usedByIndex++) {
                    if (element.usedBy[usedByIndex] == usedBy) {
                        //found and this object already cares about it
                        return element.element;
                    }
                }
                //found, but I'm not on the list that cares
                //so exit out and drop into our critical section
                elementIndex = elementCount;
            }
        }
        //not found, start critical section, search again
        //add if we don't find it
        CWhileInCS cs(myCS);
        elementCount = elements.GetElemCount();
        uint32 emptySlot = elementCount;
        for (uint32 elementIndex = 0; elementIndex < elementCount; elementIndex++) {
            TypePlus& element = elements[elementIndex];
            if (element.active && element.key == key) {
                uint32 usedByCount = element.usedBy.GetElemCount();
                for (uint32 usedByIndex = 0; usedByIndex < usedByCount; usedByIndex++) {
                    //found and this object already cares about it
                    if (element.usedBy[usedByIndex] == usedBy) {
                        return element.element;
                    }
                }
                //found, but add me to the list of folks who want to
                //retain this
#ifdef _DEBUG
                // char temp[80];
                // sprintf_s(temp, 80, "found 0x%x used by 0x%x on thread 0x%x\n\0", &element.element, usedBy, GetCurrentThreadId());
                // OutputDebugStringA(temp);
#endif
                element.usedBy.AddElem(usedBy);
                return element.element;
            }
            else if (!element.active) {
                emptySlot = elementIndex;
            }
        }
        //we didn't find anything, use an empty element or add an element
        //and fill it by calling back into usedBy
        if (emptySlot == elementCount) {
            elements.AddElemCount(1);
        }
#ifdef _DEBUG
        // char temp[80];
        // sprintf_s(temp, 80, "creating 0x%x used by 0x%x on thread 0x%x\n\0", &elements[emptySlot].element, usedBy, GetCurrentThreadId());
        // OutputDebugStringA(temp);
#endif
        return fillCachedItem(key, usedBy, dataSource, elements[emptySlot]);
    };
};

#endif
```
In your object's header file you create the cache and the key. Sample from Bevel shader.
```cpp
struct BevelCache
{
    TMCArray<real32> linemagnitude;
    TMCArray<TVector3> edgenormal;
    TMCArray<TVector3> pointnormal;
    TMCArray<boolean> usepoint;
    TMCArray<boolean> drawedge;
    IMeshTree* meshtree;
    TMCCountedPtr<FacetMesh> mesh;

    BevelCache()
    {
        meshtree = NULL;
        pointnormal.SetZeroMem(true);
    };
    ~BevelCache()
    {
        cleanup();
    };
    void cleanup()
    {
        linemagnitude.ArrayFree();
        edgenormal.ArrayFree();
        pointnormal.ArrayFree();
        usepoint.ArrayFree();
        drawedge.ArrayFree();
        if (meshtree != NULL)
        {
            delete meshtree;
            meshtree = NULL;
        }
        mesh = NULL;
    };
};

struct BevelKey {
    void* instance;
    int32 iSpace;
    real32 fVectorAngle;
    boolean bEdgeInner;
    boolean bEdgeOuter;
    boolean bGrowsSafe;
    real currentTime;

    BevelKey()
    {
        this->instance = NULL;
    };

    void fill(void* instance, BevelPublicData fData, real currentTime)
    {
        this->instance = instance;
        this->iSpace = fData.iSpace;
        this->fVectorAngle = fData.fVectorAngle;
        this->bEdgeInner = fData.bEdgeInner;
        this->bEdgeOuter = fData.bEdgeOuter;
        this->bGrowsSafe = fData.bGrowsSafe;
        this->currentTime = currentTime;
    };

    boolean operator== (const BevelKey& rhs)
    {
        return (this->instance == rhs.instance
            && this->iSpace == rhs.iSpace
            && this->fVectorAngle == rhs.fVectorAngle
            && this->bEdgeInner == rhs.bEdgeInner
            && this->bEdgeOuter == rhs.bEdgeOuter
            && this->bGrowsSafe == rhs.bGrowsSafe
            && this->currentTime == rhs.currentTime);
    };
};
```
Next declare your cache. I need a lighting context to build my cache.
```cpp
extern DCGSharedCache<BevelCache, LightingContext, BevelKey> bevelCache;
```
Then derive from the cache client and declare the virtuals you need to implement.
```cpp
class Bevel : public TBasicShader, public cTransformer
    , public IExStreamIO, public EnhanceCBezier
    , public DCGSharedCacheClient<BevelCache, LightingContext, BevelKey>
{
public:
    ...
    void fillElement(BevelCache& newElement, const LightingContext& lightingContext);
    void emptyElement(BevelCache& oldElement);
```
In your class file, instance your cache.
```cpp
DCGSharedCache<BevelCache, LightingContext, BevelKey> bevelCache;
```
Make sure the cache client gets initialized with the cache.
```cpp
Bevel::Bevel() : DCGSharedCacheClient<BevelCache, LightingContext, BevelKey>(bevelCache)
{
    ...
```
When you're done, release any cache entries you're looking at. You can also do that when you detect that your cache is no good (e.g. your change listener).
```cpp
Bevel::~Bevel() {
    releaseCache();
}

MCCOMErr Bevel::ExtensionDataChanged() {
    ...
    if (globalStorageKey.iSpace != fData.iSpace
        || globalStorageKey.fVectorAngle != fData.fVectorAngle
        || globalStorageKey.bEdgeInner != fData.bEdgeInner
        || globalStorageKey.bEdgeOuter != fData.bEdgeOuter
        || globalStorageKey.bGrowsSafe != fData.bGrowsSafe)
    {
        releaseCache();
    }
    return MC_S_OK;
}
```
When it's time to use/build your cache call getCache with what you need to build it. You can also release here if you detect a change like I do when I see the time is not the same time this object was called. Release and get will both work in a thread safe way. Get will first try to find the cache without using a critical section, but if it can't find it, it will drop into the critical section, look again, and then call back into you to build it if it's not there. Critical sections are historically REALLY slow on OSX so it's best to avoid them as much as possible. That may be fixed now, but that's how they were when I built all this.
```cpp
MCCOMErr Bevel::ShadeAndLight2(LightingDetail& result, const LightingContext& lightingContext, I3DShLightingModel* inDefaultLightingModel, TAbsorptionFunction* absorptionFunction)
{
    defaultlightingmodel = inDefaultLightingModel;
    if (!shader) {
        return MC_E_NOTIMPL;
    }
    const ShadingIn& shadingIn = static_cast<ShadingIn>(*lightingContext.fHit);

    TMCCountedPtr<I3DShTreeElement> tree;
    TMCCountedPtr<I3DShScene> scene;
    real currentTime;

    shadingIn.fInstance->QueryInterface(IID_I3DShTreeElement, (void**)&tree);
    ThrowIfNil(tree);
    tree->GetScene(&scene);
    ThrowIfNil(scene);

    scene->GetTime(&currentTime);

    if (cacheElement == NULL || globalStorageKey.instance != shadingIn.fInstance || globalStorageKey.currentTime != currentTime)
    {
        if (globalStorageKey.currentTime != currentTime)
        {
            releaseCache();
        }
        globalStorageKey.fill(shadingIn.fInstance, fData, currentTime);
        getCache(lightingContext);
    }
```
After you've run getCache, you can use cacheElement to access your cache.
```cpp
    TVector3 p = shadingIn.fPointLoc;
    if (fData.iSpace == SPACE_GLOBAL)
    {
        p = shadingIn.fPoint;
    }

    real32 mindistance = FPOSINF;

    IMeshTree* ClosestNode = NULL;
    IMeshTree* CurrentNode = NULL;

    cacheElement->meshtree->FindClosestNode(&ClosestNode, p, callLocalStorage);
```
Finally implement your cache building and tear down.
```cpp
void Bevel::emptyElement(BevelCache& oldElement) {
    oldElement.cleanup();
}

void Bevel::fillElement(BevelCache& newElement, const LightingContext& lightingContext) {
    BuildCache(newElement, lightingContext);
}
```
Now, on to the second problem: thread-local storage for state. In my example above, callLocalStorage is a place to save state used for walking the tree. I declared it at class scope, but could just as easily have put it in the ShadeAndLight2 call at the cost of some memory allocation time. If you can't do that, take a look at IShLocalStorage. I use it in LightManager to pass data from the lighting model down to child shaders further down the tree.
In my DLL header file I declare it. You might have to define one per object, since you could have threads running across multiple objects at the same time.
```cpp
extern IShLocalStorage* gMangleResult;
```
In my DLL main file I instance and create it
```cpp
IShLocalStorage* gMangleResult = NULL;

void Extension3DInit(IMCUnknown* utilities)
{
    gShellSMPUtilities->CreateLocalStorage(&gMangleResult);
```
From then on, you just get and set values on it as needed.
```cpp
MangleResult mangleResult;
...
gMangleResult->SetValue(static_cast<void*>(&mangleResult));

switch (mangleMode)
{
//TMCColorRGB based
case mmSpecular:
    return static_cast<MangleResult*>(gMangleResult->GetValue())->fSpecularLight;
    break;
case mmDiffuse:
    return static_cast<MangleResult*>(gMangleResult->GetValue())->fDiffuseLight;
    break;
```
Good luck
Hi Eric,
That is huge. Thanks so much for taking the time to share all that code and wisdom. It's taking me a while to digest it.
I'll certainly need to look into IShLocalStorage and if I can avoid repeated local memory allocation. Some of the methods you have used in getCacheItem should help with debugging too.