Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - CodeRunner

Pages: [1] 2 3 ... 5
I was once told by a member of the Allegorithmic staff that all paths of a float/switch (if/else branches) are executed during a single run of the Pixel Processor, even if only one of the conditions is met. Is this true?

I've noticed that the random node is a good way to test these situations. Random always outputs the same sequence of results when called in the same order, so calling it twice per pixel instead of once (or three times instead of two) shifts that order and changes the outcome.
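To make the detection trick concrete, here is a minimal Python sketch using a seeded PRNG. This is not Designer's actual random implementation, just an illustration of why an extra random call in a branch would visibly shift every random value that comes after it:

```python
import random

def evaluate(take_branch, seed=7):
    """One simulated pixel: a conditional branch that may consume a
    random value, followed by an unconditional random call downstream."""
    rng = random.Random(seed)              # deterministic per-pixel sequence
    value = rng.random() if take_branch else 0.0
    downstream = rng.random()              # result depends on how many calls preceded it
    return downstream

# If the "dead" branch truly never executes, the downstream random value
# differs between the two cases -- which is exactly the observable effect.
assert evaluate(True) != evaluate(False)
```

So if the untaken branch's random call really were executed and then "ignored", the downstream values would still shift; the fact that they don't suggests the branch is skipped entirely.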

When I create several conditional branches, where only one is true, random does not seem to be executed for those branches. This makes me think these branches are actually *not* being executed. Unless Designer is internally prepared for such situations and compensates for it by "ignoring" the random calls.

I would like to know how the optimization works. Does the entire conditional branch get skipped (making the number of sample calls irrelevant), or are the function nodes designed to internally skip execution when their branch should be ignored (so they still cost something even when not used)?

Can anyone straighten this out for me? If this is true, it dramatically changes the way I can go about designing graphs.

Thanks for any help!

There is an issue that occurs when switching between two nodes that have many exposed parameters. When the user double clicks, the program has to switch both the parameter panel and the 2D view, but the large number of parameters takes a second to load, which prevents the double click from registering.

It's hard to guess, but I think this is happening because the double clicks are not being detected using time stamps. Or perhaps you need an input buffer, or to detect input on a separate thread apart from image processing.
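For illustration, here is a rough Python sketch of timestamp-based double-click detection. The names and the 400 ms window are my assumptions, but the idea is that deciding "double click or not" from raw click timestamps makes the detection independent of how long the UI takes to react to the first click:

```python
import time

DOUBLE_CLICK_WINDOW = 0.4  # seconds; typical OS defaults are 400-500 ms (assumed)

class ClickDetector:
    """Detect double clicks from click timestamps alone, so a slow
    parameter-panel load can't swallow the second click."""
    def __init__(self):
        self.last_click = None

    def click(self, timestamp=None):
        t = time.monotonic() if timestamp is None else timestamp
        is_double = (self.last_click is not None
                     and (t - self.last_click) <= DOUBLE_CLICK_WINDOW)
        self.last_click = None if is_double else t
        return is_double

d = ClickDetector()
assert d.click(timestamp=0.00) is False   # first click
assert d.click(timestamp=0.25) is True    # within the window -> double click
assert d.click(timestamp=0.30) is False   # state reset; counts as a new first click
assert d.click(timestamp=1.00) is False   # 0.7 s later: outside the window
```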

A great feature to have would be an exposed integer parameter on the quadrant node that represents its "depth offset". When the user changes the value by +1, the quadrant would behave as if one extra empty quadrant was inserted above it, attached to all 4 inputs.

I'm not sure how difficult this would be to program as a feature (maybe very easy), but it would be big in terms of functionality. One could expose the variable and name it scaling, and have instant noise scaling that is similar to the scaling used by the built-in FX noises. It would also make many FX maps simpler - we would only need to create quadrants that render things.
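A quick sketch of the arithmetic, under my reading of how FX-Map quadrants subdivide (each level splits the canvas into 4 cells, halving the pattern size per level):

```python
def quadrant_scale(depth_offset):
    """Each virtual empty quadrant inserted above halves the pattern size,
    which is what makes this behave like the built-in FX noise 'scale'."""
    return 0.5 ** depth_offset

def pattern_count(depth_offset):
    """Each subdivision level multiplies the number of rendered cells by 4."""
    return 4 ** depth_offset

assert quadrant_scale(0) == 1.0    # no offset: unchanged
assert quadrant_scale(2) == 0.25   # two levels up: quarter-size patterns
assert pattern_count(3) == 64      # three levels: 64 cells
```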

Substance Designer - Technical Support - Undo Bug
 on: April 12, 2019, 05:04:42 pm 
Here is a specific undo bug that can be reliably reproduced.  I attached a graph to perform the steps with. Open the pixel processor function and perform these two steps:

1. Attach the const float 1 node to the set(myScale) node
2. Press Ctrl + Z to undo

The graph should turn black and fail to cook. The only way I've found to "repair" it is to break the related connections and reconnect them. Even saving / closing / opening the graph back up won't fix it.

It would be very user friendly if Designer handled inputs like variables when it comes to things like sampling. When we reference an input image in a graph function, it should be referenced by name instead of number, and when we rename or reorder the inputs, the samplers should update automatically, just like variables do.

It's too easy to end up in a situation where you have to delete an input to a complex graph, which causes all of the sampler references above it to point at the wrong images. It's very easy to miss one, or forget they are buried in functions. It would be nice to have the same system that variables have when one is deleted, which shows the developer where the references are.
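To sketch the difference (names and structure are mine, not Designer's internals): with name-based references, a rename is a single pass that fixes every sampler, whereas index-based references silently shift when an input is removed or reordered.

```python
class Graph:
    """Toy model: inputs referenced by name survive renames, because the
    rename operation can rewrite every sampler reference in one pass."""
    def __init__(self):
        self.inputs = {}     # input name -> image (stand-in for input images)
        self.samplers = []   # sampler nodes, each referencing an input by name

    def add_input(self, name, image):
        self.inputs[name] = image

    def rename_input(self, old, new):
        self.inputs[new] = self.inputs.pop(old)
        # Named references make this automatic, like variables today:
        self.samplers = [new if s == old else s for s in self.samplers]

    def sample(self, name):
        return self.inputs[name]

g = Graph()
g.add_input("albedo", "albedo.png")
g.add_input("mask", "mask.png")
g.samplers.append("mask")
g.rename_input("mask", "dirt_mask")
assert g.sample(g.samplers[0]) == "mask.png"   # sampler followed the rename
```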

To deal with backwards compatibility, you could just create a new type of sampler and mark the old ones as outdated.

Why does the FX Map's iterate node have its own random seed? How is it different from the node's random seed? Is there some way to create a new random seed per iteration?

I've created a shape splattering FX map that uses a single quadrant node with an iterator at the top, which iterates once for every shape drawn. This is the first FX Map I've tried to make that uses a single quadrant node. Each shape starts out on a grid layout (X*Y), which I calculated using the total and $number ($number / total and $number % total). Then each one is randomly offset. And this is where I'm running into trouble. For many node counts, I'm getting strange patterns in the result, as if the randomizer is repeating the same results every so many iterations. I'm almost certain the patterns are caused by the shape positions, because everything else is non-random. Here is an example of one, where you can clearly see line patterns going diagonally through the layout:

The image above was drawn with each shape randomly offset using a range from -0.5 to +0.5 on both X and Y axes. Here is the function used to randomize the shape offsets from their X/Y grid layout:
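For reference, here is the grid-plus-jitter math in a rough Python sketch (column count and variable names are my assumptions, standing in for $number with integer divide/modulo). With a PRNG that draws an independent value per axis per shape, no lattice pattern appears, which is what makes the diagonal lines suspicious: they suggest the random stream is repeating or correlating across iterations rather than the grid math being wrong.

```python
import random

def shape_position(number, cols, rng):
    """Place shape `number` on a cols x cols grid, then jitter each axis
    by [-0.5, +0.5) of one cell, using independent random draws."""
    cell = 1.0 / cols
    gx = (number % cols + 0.5) * cell    # grid cell center, X
    gy = (number // cols + 0.5) * cell   # grid cell center, Y
    ox = (rng.random() - 0.5) * cell     # independent draw per axis
    oy = (rng.random() - 0.5) * cell
    return gx + ox, gy + oy

rng = random.Random(1)
positions = [shape_position(n, 8, rng) for n in range(64)]
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in positions)
```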

I've verified that the grid layout is correct without randomizing the positions. But as soon as I start adding random offsets, I keep getting these strange patterns. Does anyone know why, or how to resolve this? Is the FX-Map repeating the same random results in some way?

This may be a known issue, but here it is just in case. When you copy and paste an exposed drop down box control with more than a few options (5+), the pasted version will only show the first few options. The user has to click the '+' button to make the others become visible.

Likely because the internal "size" variable is not being updated during a paste operation.
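To illustrate the hypothesis (this is a hypothetical model, not the actual implementation): if the widget renders only `size` entries and the paste operation copies the option list but leaves `size` at a stale default, you get exactly the reported symptom, with the data intact but hidden.

```python
class DropDown:
    """Hypothetical model of the suspected bug: the widget displays only
    the first `size` options, and paste forgets to copy `size`."""
    def __init__(self, options, size=None):
        self.options = options
        self.size = len(options) if size is None else size

    def visible(self):
        return self.options[: self.size]

    def paste_buggy(self):
        # Copies the options but leaves `size` at a stale default (3, assumed).
        return DropDown(list(self.options), size=3)

original = DropDown(["a", "b", "c", "d", "e", "f"])
pasted = original.paste_buggy()
assert original.visible() == ["a", "b", "c", "d", "e", "f"]
assert pasted.visible() == ["a", "b", "c"]   # only the first few show
```

This would also explain why clicking '+' "reveals" the missing options: they were never lost, only cropped by the display count.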

Painter seems to clamp exposed parameters even when they are marked not to clamp in Designer. I'm assuming this is because the sbsar file doesn't support the boolean that controls it. But why not allow users to manually type in a value anyway? Keep your widgets the same as they are now, but allow us to override the value range when we want by manually typing in out-of-range values. If nothing else, give us an option to enable this that warns us of the possible dangers of doing so.

There are many filters that specify a min/max range only to make the slider more sensible and useful. These ranges are usually not meant to be a forced rule. For example, your built-in blur filter only goes up to 16, because if it specified 128 as the max, the slider would be almost useless (too touchy).

We need some way to override the value range when it makes sense.
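A minimal sketch of the requested behavior (the function and flag names are mine): treat the min/max as the slider's range only, and clamp typed values only when the filter author explicitly opts into a hard limit.

```python
def resolve_value(typed, slider_min, slider_max, hard_clamp=False):
    """Soft ranges: min/max shape the slider's sensitivity, but a typed
    value only gets clamped when the author marks the range as a hard limit."""
    if hard_clamp:
        return max(slider_min, min(slider_max, typed))
    return typed  # the slider stays usable; typed values may exceed the range

assert resolve_value(128, 0, 16) == 128                    # soft range: typing overrides
assert resolve_value(128, 0, 16, hard_clamp=True) == 16    # hard range: clamped
```

This matches the blur example: a 0-16 range keeps the slider sensible, while a typed 128 still gets through when the range is soft.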

Once a node is imported into a project as a specific type (filter, generator, base material, etc), how do you remove that association so it can be loaded as another type without keeping the previous types?

When I re-import and choose another type, it just adds it to the list, making it both. Reloading the resource doesn't seem to remove its node types. Is there any way to remove the resource or remove its types?

Substance - Discussions - What is the "mask" usage?
 on: April 08, 2019, 02:28:28 pm 
I never know where to ask questions that relate to both Painter and Designer. There is a usage in Designer with the ID "mask", and Painter seems to make use of this, but I can't figure out where the data is streaming to/from. It doesn't seem to be coming from the layer's mask when I create filters. I can apply a white mask and a black mask and nothing changes.

Does anyone know what this references?

Organizing exposed parameters, such as sorting them, becomes very cumbersome after you have a great deal of them. We need a way to organize/sort multiple variables at once. It would also help to collapse them down into groups via some type of toggle. This way we could move ungrouped variables while the others are in a condensed state.

One of the worst things about the current setup is having to drag new variables from one end to the other when there are 30+ existing variables. If you have 5 new variables at the bottom that need to go to the top, you will likely have to drag each of those 5 variables 4 or 5 times to reach it, scrolling a little further up each time, one variable at a time. Something definitely needs to be done to help with this.

Edit: Another thing that might help is being able to choose which group an exposed variable will belong to when it is created, and having it automatically sorted into that group rather than placed at the bottom. This is not a complex fix, but it would make things much easier.


Substance Designer - Feature Requests - Document tabs
 on: April 05, 2019, 12:40:53 am 
I've noticed that Designer has a decently complex document tab system that allows users to navigate around to recently viewed graphs. But it only seems to work (as far as I can tell) when the user chooses "open reference" on a graph or function node. There seems to be no way to create these tabs for a graph that isn't being used as a sub-graph.

The tab system is so useful that I've been using the open reference command multiple times on the same sub-node, then going through and opening other graphs for each one afterward. This gives me instant access to the few specific graphs I'm dealing with at the moment, regardless of how many things are open in the explorer window.

I think it would be great to give the user the ability to create these tabs without having to use the open reference command. Especially since we don't always have a convenient sub-graph to use for this purpose. Perhaps some type of modifier key that, when held, allows you to open the graph as a new tab when it is double clicked in the explorer window?

If something like this already exists, please let me know.

It would be great to have the capability to instantly select all nodes that contribute to the input of a chosen node. Highlight would be great, selection would be better (maybe shift+hover to highlight, shift+click to select). I'm pretty sure this is something that would be very easy to implement, because the connections are all already there, internally. They are just difficult to split up, visually.

It would help tremendously when reading graphs, because it gives the user an instant visualization of everything that feeds into a given point. It would also allow a user to copy partial sections of graphs without manually "weeding" out the rest of the nodes.

For example, if you have a happy accident and create something interesting in the middle of your work, you can use this command on the node you like and copy only the nodes needed to produce that result, then paste them into a new graph. Or if you notice that some part of your complex graph is repeated more than once and want to convert it into a sub-graph, it would take only a few clicks and button presses.

Currently, we have to manually trace back links, one at a time, to determine what does what. And depending on how complex the graph is, it can become next to impossible to keep it all straight in your head as you look at one node at a time. Having the ability to instantly highlight/select the entire chain of nodes that lead up to a node in question would provide a lot of clarity.
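As the post notes, the connections already exist internally, so the feature amounts to a standard graph traversal. A rough Python sketch (the graph representation is my assumption): walk backwards over input connections and collect every ancestor.

```python
from collections import deque

def upstream_nodes(graph, start):
    """Collect every node that feeds into `start`, directly or indirectly.
    `graph` maps node -> list of its input nodes (connections already known)."""
    seen, queue = set(), deque(graph.get(start, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(graph.get(node, []))
    return seen

# blend <- (blur <- noise, levels <- noise); 'unrelated' feeds nothing here
graph = {
    "blend": ["blur", "levels"],
    "blur": ["noise"],
    "levels": ["noise"],
    "noise": [],
    "unrelated": [],
}
assert upstream_nodes(graph, "blend") == {"blur", "levels", "noise"}
```

The breadth-first walk handles shared ancestors (noise feeds two branches) and cycles are naturally avoided by the `seen` set, which is why the feature should be cheap to implement.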

Has anyone figured out a way or seen any type of graph or pixel processor that can map textures using a normal map? I want to create a filter that inputs a texture and normal map, then morphs the texture based on the normal map in a way that makes it appear to "wrap" around the normals. Something similar to overlaying a texture onto 3D geometry.

I honestly don't have a good reason to do this, except that I think it would make a cool filter that could be used to do some neat things. I've tried projecting the texture pixels using the normal vectors in a pixel processor, but can't seem to come up with anything that looks good. The texture pixels end up too distorted. The problem is that you can't compensate at one pixel for stretching done at another, because of how the pixel processor works; e.g., if you shrink one area, you can't expand another to make up for it.
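For anyone experimenting along the same lines, here is the basic parallax-style offset in a Python sketch (names and the strength value are mine). This reproduces the limitation described above, since each pixel only shifts its own sample position along the normal's XY direction, with no way to redistribute stretching to neighbors:

```python
def warp_uv(u, v, normal, strength=0.05):
    """Parallax-style sketch: offset the sample position along the
    normal's XY. `normal` is a decoded (-1..1) tangent-space vector."""
    nx, ny, nz = normal
    return (u - nx * strength) % 1.0, (v - ny * strength) % 1.0

# A flat normal (0, 0, 1) leaves the sample position untouched;
# a tilted normal slides it sideways, creating the "wrap" look.
assert warp_uv(0.5, 0.5, (0.0, 0.0, 1.0)) == (0.5, 0.5)
u, v = warp_uv(0.5, 0.5, (1.0, 0.0, 0.0))
assert (u, v) == (0.45, 0.5)
```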

Anyone come up with something that does anything neat? Anything that maps texture over height or normals would be interesting. I'm not asking for a graph, just some advice.


I vote for an option that changes timing readouts to percentage readouts, where each readout shows what percentage of that graph's processing it is using up. This would make the readouts far less machine-specific, or even situation-specific: if I play music while I work, my nodes end up looking like they take twice as long. This kind of thing makes it tricky to use these numbers in any way other than relative to each other, and percentage readouts would make this much simpler.

Another idea is to use some common process as a timing unit, such as the time it takes to process a blend node using a specific mode/opacity. Divide all of the readout times by this unit, and call it process units.
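Both suggestions are simple arithmetic on the existing timings; a rough Python sketch (all names assumed):

```python
def readouts(timings_ms, blend_unit_ms=None):
    """Convert raw node timings into machine-independent readouts:
    percent of total graph time, plus optional 'process units' measured
    against a reference blend-node time."""
    total = sum(timings_ms.values())
    out = {}
    for name, t in timings_ms.items():
        entry = {"percent": 100.0 * t / total}
        if blend_unit_ms:
            entry["units"] = t / blend_unit_ms   # e.g. "this node costs 3 blends"
        out[name] = entry
    return out

r = readouts({"blur": 30.0, "warp": 60.0, "blend": 10.0}, blend_unit_ms=10.0)
assert r["warp"]["percent"] == 60.0   # warp is 60% of this graph's cook time
assert r["blur"]["units"] == 3.0      # blur costs three reference blends
```

Either readout stays stable when background load (like music playback) slows every node down by the same factor.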

As a related note, I noticed that the timings appear to be recording variable changes as part of their total. For example, if you use the preset selector for a node, its timing will be much larger than if you just tweak one of its parameters. It would be great to skip the variable updates during the timing recording, if possible, to make the readouts more accurate. Or maybe just don't update the displayed readout during exposed variable tweaking.
