Engineers vs. Sales People


You know that table in the lunch room where all your engineers and sales people eat together, chatting and laughing? No? So, the engineers and the sales staff don’t meet in perfect harmony at your company? Well, you are not alone. The struggle in communication between engineers and sales people is far too common, which has been made obvious in this hilarious video.


In defense of all you sales people out there, there are just as many engineers and developers who have no idea how sales works either. The products need to be sold for the engineers to get their paychecks in the end. Needless to say, each group is a very important part of a product company. The mutual frustration between the two groups, however, is not.


Let me know if this sounds familiar:

Salesman: I just came back from a meeting with the customer. They are just on the verge of buying our Tx-Truck, but they must have the new exhaust system. That’s fine, right?

Engineer: Well…no. Our new exhaust system is not really compatible with the Tx-Truck.

Salesman: What? Why? Why am I hearing about this now?! Do you have any idea how important this deal is to the company!?

Engineer: It’s in the documentation. Also, I wrote about this in our internal newsletter last month.

Salesman: I have not seen that part of the documentation and don’t have time to read every update you guys send out. I get 100 emails a day!

Engineer: You should have thought about that before you promised the customer something we cannot deliver…

Salesman: *Grunts* This is what the customer wants! This is what we need to close the sale! How can it not work? It is virtually the same as the old one!

Engineer: In fact, the TTH-capacitor of the old truck uses a BHX-system of version 11 that…

Salesman: Let me stop you right there…What can we do to deliver the correct solution to the customer?

Engineer: *Sighs* Outside of promising the customer something we actually DO manufacture? Well, we can use the version 10.9 HHV-system and patch it with a nV-capacitor. That would probably solve it, but it will cost more and take us more time to manufacture.

Salesman: *Sighs* Just solve the problem, you are the engineer. I will offer the customer a discount for the delay this will cause…

– The manager appears at the door

Manager: Discount?! What am I hearing about a discount? We can’t offer any more discounts with our profit rate on this product. Didn’t you read my post in the newsletter on how important it is that we maintain our profit rates? From now on, you will take the engineer with you to your sales meetings!

Engineer & Salesman: Noooo!


Can you expect a salesman to have full, and constantly updated, technical product knowledge? In a perfect world that would, of course, be great but that would also take time from what sales people do best: Sell!


This frustration can be kept to a minimum if the sales people have a CPQ tool that keeps the engineers’ detailed product knowledge at hand while they discuss the product with the customer. You will no longer need to put out fires, caused by delays and misunderstandings, with discounts. Instead, the sales people will be able to offer products that you know you can manufacture, at a price that you know is profitable.


Who knows, when the sales team and the engineers no longer have to argue about these kinds of issues, there may actually be a day when you walk into the lunch room and find them eating together!

The holy grail of standardization

When a typical company buys a typical software package, something typical happens: A conflict. It’s sort of standard that what’s standard doesn’t meet the company’s standard. So, let me present the boxed solution on how to get out of the out-of-the-box solutions. Yet still remain in the box.

If you ask the IT department about the main criteria when it comes to new software, they probably say it should be out-of-the-box. To the business, this is not an obvious choice. Because if somebody makes something out-of-the-box, the business needs to think outside-the-box to continue working. This poses both a threat and an opportunity.

To standardize or customize: Is that the right question?

To implement something out-of-the-box, you’ll probably need to make some changes to your current processes. Or to put it differently – it’s very unlikely that the out-of-the-box functionality is based on your current processes. As a natural consequence, the way you work must change. This is normally painful to everyone involved but also presents an opportunity to get rid of some chronic pain the organization suffers from on a daily basis.

Let me explain what implementing an out-of-the-box solution typically entails: compromising and adapting to a standard way of solving standard problems. This is normally a good way of addressing some not-so-unique challenges and standardizing some standard things. It will most likely eliminate some of the laborious processes that currently require heavy painkillers.

But from a change management perspective, you should respect how the implementation alters your current processes. This can make or break the organization when it comes to out-of-the-box adoption. Some parts of a CPQ solution tend to be highly standard, but some parts are truly unique. That’s why an out-of-the-box solution often has to be shaped to support reality.

Back to reality

So, what do we need to remember? Make sure you don’t put your sales force on a strict diet just to get into the new suit. Respect that processes come in many different shapes and sizes. When you get an out-of-the-box solution, make sure there are ways to tailor the final result. Keep this in mind as you continue to seek the holy grail of standardization.

The new industry standards

CPQ and the new industry standards

You’ve heard the buzz. You’ve read the blogs. And you know that Industry 4.0 and the Internet of Things are going to fast-track industrial manufacturing to a new level. But what do these new paradigms have to do with CPQ software?

The big strategic partners and system integrators like McKinsey, PwC, CapGemini and others are now all talking about transforming business processes to leverage the new technologies and increase speed, efficiency, reliability and flexibility, not only to benefit the company but to meet the customer’s ever-changing needs.


Adapting to mass customization

Technology is not the answer to all questions. It’s what technology can enable and enhance that should support technology investment decisions. Today, technology can help us understand and map customer requirements, from the early design stages of the product to the offering and through to delivery – throughout the entire lifecycle. This is made possible by the Internet of Things.

Monitoring from the factory floor through to the installation, together with big data processing, not only offers a way to understand customer and application requirements better. It also offers a way to better understand what to develop, and how customer and business segment requirements change over time. The digitization of the economy and of how alternatives are presented is changing the game, and companies, yours included, must adapt.


Complex configurations, simple selling

With this new paradigm, as machines get smarter you’ll need a smarter way to sell them. Presenting the product is not about showing the technical requirements in detail. Products simply must fit specific customer needs, and the customer must be able to quickly grasp and see that their needs are being met. So, the entire company must understand the benefits – and work towards this goal. This is what the bigger system integrators are talking about when they encourage companies to transform. Successful companies will reap what they sow: streamlined business and production to meet the new market criteria.

For example, customers have got to know whether what they are being offered is a standard off-the-shelf product or custom built for specific needs. Companies must assess their customers’ requirements and see how they fit into the context of the business. This isn’t new. Just take IKEA: the customer need for flat packages developed into an entire business model.

Now the focus is on using computers to achieve a more in-depth understanding. Evaluating the available technologies and how they can be used to further broaden product development is a must. All this streamlining of the organization must be concentrated and presented in the best possible way to the customer.

If you’re using a product configurator that lets users consistently work in an environment that they can’t fail in, offering only valid solutions for the choices they make, in an optimized solution space, then you’ve built a solid foundation for customer satisfaction. Combine this with a platform available to support dynamic document generation, price management and quotation workflow, and you possess the tools to change your business offering and increase opportunities.

The Hero

The Hero’s Quest

The lights dim slowly. The movie is just about to begin. Once again, I will watch the same old story unfold before my eyes.

It’s the classical theme of the good fighting the bad. There’s a heroic mission to accomplish, and a villain to defeat in the final battle.

It always begins in a quite ordinary, predictable way. But suddenly, the soon-to-be-hero realizes something strange is happening. Struck by an epiphany, the hero can do things that no one else can. Something magical now resides within this person and this new superpower comes in handy, because all of a sudden something exists that threatens the ordinary.

But first: This magical force needs to be explored and the hero will need some training to fully master this new superpower. Out of the blue, an old master appears to give guidance and the appropriate training. This will help the hero stand ready to defeat that big threat.

At this point, our hero normally faces some great difficulties and some minor defeats, which teach a valuable lesson. But this is just a theatrical way of setting the scene for the ultimate battle.

In the ultimate battle all magic superpowers are put to the test, and finally the hero defeats the villain. Just like Harry defeated Voldemort or Luke defeated his father.

Then everything returns to normal. All threats are just a distant memory of the past (and this holds true until there’s a sequel – then the same story is told once again).

The CPQ quest

So back to my movie. It’s one of the best ones I have ever seen. It’s called the Call Pro Quest.

It’s about this heroic girl who realizes the ineffectiveness of her fellow sales reps, meets a configure-price-quote (CPQ) guru and gets her master training in a modern constraint-solving piece of software. After a few battles with fearsome product managers and back-stabbing sales executives, she finally proves there is a way of eliminating all order errors and thereby overcomes the threat of the evil competitor. She saves the company and lives happily ever after.

This movie might not be coming to theaters near you in the summer of 2016. But it’s definitely based on a true story, and you can be the hero.

Ready to face your innermost fears and win an epic battle? Tacton and I will be there to support you. We have lived this movie so many times and we know from experience that it will be a smash hit.

Configuration and football

The optimization dilemma

If you live in Europe you probably know about the ongoing 2016 UEFA European championship in football*. In every corner of Europe people are hoping, cheering and shouting together. In fact, I have to admit that during game days my brain is not entirely focused on CPQ. Instead, a significant portion of it is dedicated towards football – and I don’t think I am alone!

The other day Sweden played a really bad 1-1 game against Ireland. After the game, a few brain cells that were still geared towards CPQ formed an idea about configuration and football. The question going through my head was whether Mr Erik Hamrén (Sweden’s national team coach) would have any use for a CPQ system to select his starting eleven. If so, how would he use it?

I mean, almost every dedicated football fan has his or her own idea about the line-up the national team should use. After the disappointing game against Ireland, everyone had a solution for what the line-up against Italy should be in order to fix all the issues that surfaced.

Mr Hamrén, who selects the starting squad, sure had a delicate issue at hand. Should he optimize the defence against the powerful Italians, despite the fact that we did not have a single shot on goal against Ireland? Should he start with reliable players like Sebastian Larsson over more creative players like Erkan Zengin?

Looking at it from a mathematical and CPQ perspective, there are over a million possible line-ups he could choose from. If Mr Hamrén were to consider each of these, it would not only render him sleepless but also jobless, since no coach would ever consider each player for each position. You would not put Sweden’s mega star Zlatan Ibrahimovic as a goalkeeper, defender or even on the bench, right?
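The “over a million” figure holds up: picking a starting eleven from a typical tournament squad of 23, ignoring formations and positions entirely, is a simple binomial coefficient. A quick Python sketch:

```python
from math import comb

squad_size = 23   # a typical tournament squad
starters = 11

# Number of ways to choose 11 starters out of 23, ignoring positions
line_ups = comb(squad_size, starters)
print(line_ups)   # 1352078 possible line-ups
```

Constraining players to realistic positions shrinks this number dramatically, which is exactly what a configurator does to a solution space.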

A configuration approach to football

So, how could this problem be ”solved” with a configurator? My thought would be to categorize each player by his main Player Type: Goalkeeper, Defender, Midfielder or Striker. This would limit our options right away without any real trade-offs.

Most defenders and midfielders are specialized in one position, like central position or the wing position. If you have a wing position you are typically specialized towards either the left or the right wing. Still, these positions are more volatile than your Player Type. Thus, I would instead place a grade on how suitable a certain person is for each position and use this in the optimization. For example left wing defender Martin Olsson is a highly specialized wing player and would score high as ”Wing” but low as ”Central” etc. Thus, there could be a line-up in which he would be optimized as a central defender but it is highly unlikely.

Then you have to grade each player according to skills like speed, shot accuracy, passing accuracy, header skills etc. and map that against the tactic you want to optimize for, like ”Defensive stand with counter attacks”. You also need to take into consideration the player’s current shape, possible injuries, yellow cards and how his style combines with the players around him…and…and…Phew!
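To make the grading idea concrete, here is a toy sketch in Python. The players, positions and suitability grades are invented for illustration, and a real configurator would use constraint propagation and optimization rather than brute force over every assignment:

```python
from itertools import permutations

# Hypothetical suitability grades (0-10) per defender position
ratings = {
    "Olsson":    {"wing": 9, "central": 3},
    "Granqvist": {"wing": 4, "central": 9},
    "Lindelof":  {"wing": 5, "central": 8},
}
positions = ["wing", "central"]   # two defender slots to fill

def score(assignment):
    # Total suitability of a (player, position) pairing
    return sum(ratings[player][pos] for player, pos in zip(assignment, positions))

# Brute force: try every way of placing two of the players over the slots
best = max(permutations(ratings, len(positions)), key=score)
print(best, score(best))   # ('Olsson', 'Granqvist') 18
```

The highly specialized wing player ends up on the wing and the central specialist in the middle, even though other assignments are technically possible.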

I would love to prove my point on how awesome this Line-Up Configurator would be and how it would outperform Mr Hamrén, but I guess CPQ should probably be used where it is best: for manufacturing, and… well… there is a new game soon and I don’t want to miss it!


Heja Sverige! (i.e. ”Go Sweden”).


* as you may know, in Europe soccer is referred to as football


Guided selling – the napkin of the 21st century

How people do business

Let me tell you the ancient truth according to an even more ancient man I once met. “I always try to visualize the final solution for the customer. All I ever needed was a pen and a piece of paper.” Just to prove his point, he grabbed a napkin and sketched out a few words that were truly impossible to read. “Everything that’s important to the customer can always be written on a napkin,” he said in a subdued yet authoritative voice.

“And, um, what about order management and pricing?” asked the curious business consultant (that would be me…). “I just gave this sketch to our engineers and they figured out what to deliver. The customer is always right you know, so they just needed to solve it and give me a price. And if the price was too high we could always increase it to get the desired discount.” Working with sales processes, I had to admit that this was a very effective way of doing business. Just write down some overall requirements, promise to deliver and adjust the price to match expectations. The problem is that this process is optimized for a single person (in this case, the sales rep). It will always be sub-optimal for everybody else in the company.

Guided selling brings simplicity back in style

The core problem of this approach is that not many companies can afford to operate in this old-fashioned way nowadays. Competition is razor sharp, the need for standardization is crucial and speed gives an important competitive edge. Because of its ineffectiveness for the organization as a whole, the napkin is a thing of the past. But let’s not forget it’s a proven way of doing business.

When discussing guided selling solutions with customers, I often refer to the napkin and how we adapt it to the 21st century. What would you write on a napkin? Whatever the answer to this question may be, that is what should be included in the guided selling app. The napkin in the 21st century enables sticking to what’s essential without complicating matters with awkward technical questions. The trick to ditching the details is to let them act silently in the background. In other words, let the app focus on customer needs while the CPQ keeps track of all the techie parts.

This is how we bring the napkin back in business – as a role model for simplicity. The only difference? We make sure it’s aligned with an efficient, modern way of doing business.


Author’s note: All ancient persons in this blog post are fictional, and any similarities to people I have met are somewhat accidental.

Why did my .count constraints start performing badly in 4.5?

Since long back, multiple .count constraints over attributes with the same name (for instance several constraints counting the values of an attribute x) can be made more efficient for the engine by adding the auxiliary property &opt_propagation with the value yes to a step. The separate constraints present in the TCX model will then be merged by the engine (not in the TCX file, but when the engine’s internal representation of the step is built) into a single global constraint. In most cases this improves propagation performance greatly compared to considering the individual count constraints separately, because the configurator then understands that if the total number of attributes x is 11, the sum of the counts of all the possible values for x can never exceed 11. Without opt_propagation the engine will eventually come to this conclusion too, but much later.

However, if one occurrence of one of our attributes named x has a very large domain (for instance int) but could in practice only take a value that corresponds with one of the count constraint values, this slows down the entire merged multi-value constraint, whereas it would only have slowed down the constraint corresponding with that value if we hadn’t been using opt_propagation. Starting with 4.5, opt_propagation is activated by default, so if you get bad performance in models with .count constraints, check whether you have any such attributes with unnecessarily large domains. If de-activating the part or temporarily reducing the domain of the attributes helps, you have probably identified the problem.
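The invariant that the merged global constraint exploits can be illustrated outside the engine. This is a Python sketch, not TCstudio code, and the values are made up:

```python
from collections import Counter

# Eleven occurrences of an attribute named x, each taking one value
xs = ["a", "a", "a", "b", "b", "c", "c", "c", "c", "d", "e"]

# Individual .count constraints each track one value in isolation:
count_a = xs.count("a")   # 3
count_b = xs.count("b")   # 2

# The merged global constraint additionally knows the counts are linked:
# the sum of the counts of ALL values can never exceed the number of xs.
counts = Counter(xs)
assert sum(counts.values()) == len(xs)   # 3 + 2 + 4 + 1 + 1 == 11
```

Each isolated constraint only ever sees its own value, so without the merge the engine has to discover this linkage through much slower propagation.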

Delaying constraints while troubleshooting

When troubleshooting, it is often advantageous to add new constraints to get an engine state more suited to investigation. For best effect, these constraints should be placed first in the top part, not using “and”, “or”, “->”, or “<->”. This ensures that the engine implements these constraints first, before doing anything else. To avoid having these new constraints cause the engine to fail in an earlier step than the one under investigation, do the following:

  1. In the top part, add the attribute ONE, with the value 1.
  2. Add a field for the attribute ONE in the step before the one being investigated.
  3. Add the part ZERO, with Number of Instances=top_part_attr(ONE).
  4. In the part ZERO, add the attribute ZERO, with the value 0.

From now on, whenever a new constraint causes the engine to fail too early, add ”+ZERO.ZERO” to the end of the constraint. This is not recommended for permanent constraints, which are supposed to remain in the finished model. Those constraints should work from the first step where all attributes actually used exist. The exception is collection constraints over parts not yet in existence; in that case, the constraint should be connected in some way to one of those parts, rather than to a part created just for that purpose.

How to Set Default Values in TCstudio

A recurring task in modeling is to set default values. There are several ways to do that; let’s clear up what they are and when to use them:

Component Order*

Components closer to the top will be preferred over those further down. (This is really due to the default search strategy, but it is nonetheless a useful trick.) By rearranging the components, you can influence which component will be prioritized. All other things equal, with “vbrakes” on top, “vbrakes” will be selected before “disc_brakes”; with “disc_brakes” on top, “disc_brakes” will be preferred.

Use it: for simple default behaviors.

*Component order is really just a special case of “domain order”, but a very common and useful case. See the end of this blog post for an overview of the domain order for the other domains.

Search Strategies

By using Try in a search strategy you can ask the engine to set a specific value. If the value is not allowed because of other constraints or choices the user has made, the engine will simply ignore that action and move on to the next one. Search strategy actions are prioritized from top to bottom. Want to change the order? Just drag the actions around (dragging works since TCstudio 4.5; in previous versions you can use the arrows next to the editor field). A custom search strategy could, for example, contain two Try actions to set front brake type and drive train type.

Use it: to have full control of the default values. Also see Sebastian Dahlin’s blog post Search Strategy Default Values Using “Call” and “If”.

Search Strategies + Help Variable

Sometimes you want the default value to depend on another field or on some calculation, but the “Try” action can only set a fixed value. To solve this you can introduce a help variable that turns a constraint on or off, where the constraint is used to set the actual default value. An example: our search strategy will try to set the help attribute “opt_brakesDifferent” to “No”, which will succeed only if it is possible to set the front and back brakes to the same type. If not, the configuration engine will continue as if nothing happened.

Tip: Name your help attribute so that answering “No” gives you the desired default value. In our case we used “opt_brakesDifferent” instead of “opt_sameBrakeType”, as “No” is the default Boolean value.

Soft Function (~=) Constraint

A soft function means the engine will try to assign the value specified, but if it can’t, it will just move on and not cause any trouble. In this way a soft function is very much like a Try in a search strategy, though they differ in priority and performance (see the sections below):

A simple example of a soft function setting a default value (rear brake type defaulted to V-brakes):

type~=vbrakes

Use it: for models with few default values that won’t conflict with each other (or where their individual order doesn’t matter), or for quick tests of new default values. Also see the “Performance” section below.

Note: Function attributes/features that are not assigned in any other way need to be set using a soft function (~=) or a hard function (:=, the equivalent of = for function attributes/features); otherwise the engine won’t know what value to assign to the function and will refuse to start. Also, in some cases where function attributes/features are part of complicated calculations, setting a default value for them helps the engine find a solution faster.

Default Value

The “Default Value” checkbox (a.k.a. “Hard Default”) doesn’t just set a default value – it also commits the value. This means that the value can only change if the user changes it OR if there is a conflict with another value and the engine asks the user for permission to change it (conflict resolution). This differs from the other default values, which are suggestions rather than forced values, and it can have unexpected consequences. The thing is, if you have two default values that are in conflict (directly or indirectly) at startup, the model won’t run at all, as the engine isn’t allowed to resolve the conflict on its own: a simple example model with two conflicting Default Values will crash on startup. ⚠ Therefore – unless you specifically want to commit a value beforehand – don’t use the “Default Value” checkbox; use search strategies or soft defaults instead.


The engine will prioritize the defaults mentioned above as follows (the first on the list takes precedence over anything that comes later):

  1. Default Value (a.k.a. “hard default”)
  2. Soft function (a.k.a. “soft default”)
  3. Search strategy
  4. Component order (Domain order)

Within each method, the prioritization looks like this:

Component Order

If you are using the default search strategy (ffc_all_vars, which is what you use if you haven’t explicitly set any search strategy), the engine will start looking at the attributes with the smallest domains and most constraints, and for each attribute it will try the domain elements (that is, the components if the domain is a class) in ascending order as described above. So for the first field the engine looks at, let’s call it A_field, it will select the first component. For the second field it looks at, let’s call it B_field, it will pick the first component available unless it is in conflict with the already selected component for A_field; in that case the engine will try the second component, then the third, and so on until it finds a valid component for B_field. However, if the model is set up so that the engine prefers starting with B_field, then the first available component will be selected there, which will then affect what components are available for A_field (and all the other fields in the model).
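The behavior described above can be mimicked with a small backtracking search. This is an illustrative Python sketch of smallest-domain-first search with ascending domain order, not the actual ffc_all_vars implementation:

```python
def solve(domains, constraints, assignment=None):
    """Tiny backtracking search: smallest domain first, values in listed order."""
    assignment = assignment or {}
    unassigned = [f for f in domains if f not in assignment]
    if not unassigned:
        return assignment
    field = min(unassigned, key=lambda f: len(domains[f]))  # rough ffc stand-in
    for value in domains[field]:          # ascending order = "component order"
        candidate = {**assignment, field: value}
        if all(check(candidate) for check in constraints):
            result = solve(domains, constraints, candidate)
            if result is not None:
                return result
    return None

# A_field lists vbrakes first, B_field lists disc_brakes first,
# but a constraint forces both fields to match:
domains = {"A_field": ["vbrakes", "disc_brakes"],
           "B_field": ["disc_brakes", "vbrakes"]}
same = lambda a: ("A_field" not in a or "B_field" not in a
                  or a["A_field"] == a["B_field"])
print(solve(domains, [same]))   # A_field's component order wins: both "vbrakes"
```

Because A_field is searched first, its top component becomes the default, and B_field’s own component order is overridden by the constraint.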

Soft Functions (a.k.a. Soft Defaults)

Soft functions will be executed in the order they appear in the model: start looking at the root part, then move down the part tree depth-first. On each part, start with the first soft function in the constraints list and then go down. Any soft function constraint that cannot be fulfilled will be skipped over.

Search Strategy Actions

As mentioned before, search strategy actions are tried in the order they are listed in the search strategy. Any Try value that cannot be assigned (e.g. if the field has been committed to a different value by the user) is skipped, and any subactions to that Try action are then skipped as well. Later values can never overwrite earlier values.

Default Values (a.k.a. Hard Defaults)

Default Value commits aren’t prioritized since they are commits. If they can be set, they will be set. If two Default Values conflict with each other, the model won’t even start.

Performance Impact

Component order doesn’t really impact performance. In some cases picking one component over another significantly limits the solution space, and by putting it at the top of the list the engine will find a solution faster (or, by moving it down, the engine will need more time). But in most practical cases you will not notice any difference in model performance if you order your components in the order you want them to default.

Search strategies can be built to increase the performance of the model or, built the wrong way, they can decrease it. Most of the time, however, using search strategies to set a number of default values will not impact performance noticeably.

Soft functions are behind the scenes the same thing as search strategies, except that a soft function will always be applied before the search strategy actions. Still, if you have a lot of soft functions, step changes (and model startup) can be a bit slower than if you set the corresponding default values with search strategies.

Default Value (a.k.a. “hard default”): each default value that is set will make the model a little slower, just as each commit by a user will make the model a little slower (remember, the “Default Value” checkbox actually works by committing a value). This will only be noticeable once you have a lot of them (as I said: they are commits, and when was the last time you noticed the model being slowed down by committing too many fields?). However, step changes may be up to twice as slow when using “Default Value”, since the engine will have to run certain calculations twice if you have introduced at least one Default Value.

What to Use When

In small and simple models, the component order may be enough to set the default values you want. However, it’s not very transparent which defaults will be prioritized when they conflict. If you need to control which default values are more important than others, it’s better to create a custom search strategy with Try actions ordered in priority order. If you need defaults that are not fixed values, for example defaulting a field to the value of another field, use search strategies combined with a help variable as explained above.

Soft functions (a.k.a. “soft defaults”) are easy to use in small/simple models and will override both component order and search strategies. However, just as for component order, you cannot control the priority of the defaults in case they conflict with each other (to do that, use search strategies). For large amounts of defaults, use search strategies instead to ensure good performance. Soft functions can also be necessary to set initial values for postcalc fields if they are involved in complex calculations with many dependencies, so that the engine knows where to start.

The “Default Value” setting shouldn’t be used to set defaults unless you want the value to be fixed until the user changes it. All the other ways to set defaults will suggest values for the engine to use, but setting “Default Value” commits the value, which means the engine is forced to use it. If you use this for several fields where the values for the different fields are in conflict with each other, the model won’t even start.

Addendum – Domain Order

As mentioned in “Component order” above, the component order is really just a special case of “domain order”. Like components, the other domains will also be searched in ascending order (unless the search strategy specifies otherwise), i.e. from the first to the last:

Named Domains

In named domains, the domain element with the lowest value assigned to it will be tried first. For Booleans this means “No” will be the default element, since it corresponds to the value “0”, whereas “Yes” corresponds to “1”. For other named domains it depends on the values you have assigned to the elements. Of course you can change the priority/order of elements by changing their values, but if there is a natural value to use for each element (e.g. the int value of the flange size in a named domain listing flanges) it’s better to stick with those values and use a search strategy or soft functions to set the default values. Otherwise you risk confusing anyone else working with the model (or yourself). For example, a named domain called “size_nd” could assign each element a value that suits it.


Integer attributes/fields will start at -10,000,000 and go up from there, unless there is some constraint that sets another minimum value. This is why it’s good to set limits on int attributes/features if you know they only have a certain range; otherwise you make the engine try pointlessly low numbers (or pointlessly high ones, if your search strategy searches in descending order).


For float attributes/fields only the min and the max values will be checked (in that order, unless the search strategy specifies “descending order” of course).


Functions can take any content so there is no way for the engine to try values for them directly, therefore the “domain order” concept doesn’t apply to them.

Semi-new feature in TCstudio 4.5.2: Generate report in configuration runtime

TCstudio 4.5.2 was released just a few weeks ago. In this blog post I’d like to focus on a semi-new feature that was released in this version. In the TCstudio runtime, under the Options tab, there is an option called Generate Report. This is nothing new; it’s been there for quite a while. What’s new is the possibility to save this report to a file, or to copy it.

In this report you get the complete part structure with all attributes and their values in XML, which is basically the same as the function called View Part Tree. So why would you use this instead of View Part Tree? The reason is control. With View Part Tree it’s very easy to find an attribute you’re interested in and see how it changes. With the complete XML you can instead watch the whole part structure change, and use XML editors to analyze the changes. One thing I use it for quite commonly is to compare the results of two versions of a model. If I make a change, I don’t just want to test that it’s doing what I expected, I also want to check that it’s not doing something unexpected. By comparing two complete XML reports, I can find all changes to the part structure.
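You don’t even need a full XML editor for the comparison; a few lines of script can surface the differences between two saved reports. A minimal Python sketch, where the file names and XML fragments are made up for illustration:

```python
import difflib

# Hypothetical fragments of two saved Generate Report files
old_report = """<part name="top">
  <attribute name="front_brake">vbrakes</attribute>
  <attribute name="rear_brake">vbrakes</attribute>
</part>"""
new_report = """<part name="top">
  <attribute name="front_brake">disc_brakes</attribute>
  <attribute name="rear_brake">vbrakes</attribute>
</part>"""

# Line-by-line diff highlighting every change to the part structure
diff = list(difflib.unified_diff(
    old_report.splitlines(), new_report.splitlines(),
    fromfile="report_v1.xml", tofile="report_v2.xml", lineterm=""))
print("\n".join(diff))
```

Only the changed attribute shows up in the output, so even in a large part structure an unexpected side effect of a model change is easy to spot.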